
I am trying to run a simper analysis in R (vegan package) on a large dataset. I had some success running it on my local machine (10 cores, 16 GB RAM) with smaller datasets. However, when I scaled the analysis up to include a larger dataset, the code terminated with an error like:

Error: cannot allocate vector of size XX Gb

So I tried the same analysis on an Amazon AWS instance (specifically, an r3.8xlarge instance: 32 cores, 244 GB RAM), but I hit the same error, this time specifically:

Error: cannot allocate vector of size 105.4 Gb
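For scale, a 105.4 Gb numeric vector holds roughly 1.3 × 10^10 doubles (8 bytes each), far more elements than the input table itself, so the failing allocation is almost certainly an intermediate object rather than the raw data. One plausible culprit (an assumption about vegan's internals, not something the error message confirms) is that simper materialises a dense contribution matrix with one row per between-group sample pair and one column per species, so its footprint grows with pairs × species:

# back-of-the-envelope check with hypothetical dimensions -- substitute your own
n1 <- 500; n2 <- 500         # samples in the two compared groups
p  <- 30000                  # number of OTU columns
pairs <- n1 * n2             # between-group sample pairs
pairs * p * 8 / 1024^3       # Gb for one dense double matrix: ~55.9 Gb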

Both systems I have tried (local and AWS) are Ubuntu machines, and sessionInfo() reports:

R version 3.0.2 (2013-09-25)
Platform: x86_64-pc-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

Here are the relevant lines of code I am running:

# load the package that provides simper()
library(vegan)

# read in data as DFs for mapping file
print("Loading mapping file...")
map_df = read.table(map, sep="\t", header=TRUE, strip.white=T)
rownames(map_df) = map_df[,1] # make first column the index so that we can join on it
map_df[,1] <- NULL # remove first column (we just turned it into the index)

# read in data as DF for biom file
print("Loading biom file...")
# note: stringsAsFactors must go inside read.table itself; wrapping the result in data.frame() leaves already-created factor columns as factors
biom_df = read.table(biom_file, sep="\t", header=TRUE, stringsAsFactors=FALSE)
biom_cols = dim(biom_df)[2] # number of columns in biom file, represents all the samples
otu_names <- as.vector(biom_df[,biom_cols]) # get otu taxonomy (last column) and save for later
biom_df[,biom_cols] <- NULL # remove taxonomy column
biom_df <- t(biom_df) # transpose to get OTUs as columns
biom_cols = dim(biom_df)[2] # number of columns in biom file, represents all the OTUs (now that we've transposed)

# merge our biom_df with map_df so that we reduce the samples down to those given in map_df
merged = merge(biom_df, map_df, by="row.names")
merged_cols = dim(merged)[2]

# clear some memory
rm(biom_df)
print("Total memory used:")
print(object.size(x=lapply(ls(), get)), units="Mb")


# simper analysis
print("Running simper analysis...")
sim <- simper(merged[,2:(biom_cols+1)], merged[,merged_cols], parallel=10)
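One lightweight way (base R only) to see where memory actually peaks between these steps is a gc() report; its "max used" columns record the session's high-water mark since the last gc(reset=TRUE):

print(gc())  # "used" vs "max used" show current and peak Ncells/Vcells, with Mb columns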

Any ideas?


1 Answer


Based on the information you have provided, it is not clear at which point your machine runs out of memory. You appear to be using base R functions for your analysis. You may want to try the data.table package (have a look at the fread function, which is much faster than read.table).
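As a minimal sketch of that suggestion (fread returns a data.table, so wrap it in as.data.frame() if the downstream merge/indexing code expects a plain data frame; fread also keeps strings as character by default, matching the stringsAsFactors=FALSE intent above):

library(data.table)
# fread is a much faster drop-in for read.table on large delimited files
biom_df <- as.data.frame(fread(biom_file, sep="\t", header=TRUE))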

answered 2015-01-06T05:30:59.577