batch is easy to understand: it simply groups consecutive samples into batches of batch_size. Note that the last batch of an epoch may hold fewer samples than batch_size (it is at most batch_size).
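As an aside, if strictly fixed-size batches are needed, the short final batch can be dropped. A minimal sketch, assuming the drop_remainder argument available from TF 1.10 onward (older versions used tf.contrib.data.batch_and_drop_remainder):

# every emitted batch now has exactly 4 samples; the trailing 3-sample
# batch of an 11-sample epoch is silently discarded
dataset = dataset.batch(4, drop_remainder=True)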
dataset.repeat corresponds to what is colloquially called epochs, but in TensorFlow its position relative to dataset.shuffle determines whether samples from different epochs get mixed together.
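For reference, repeat's count argument controls the number of epochs; a quick sketch of the two common forms (nothing here beyond the standard tf.data API):

# repeat(2): iterate over the data exactly twice, then raise OutOfRangeError
# repeat():  with no count, cycle indefinitely; the training loop decides when to stop
dataset = dataset.repeat(2)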
dataset.shuffle maintains a shuffle buffer of buffer_size samples. Every sample the graph requests is drawn at random from this buffer, and as soon as one sample is taken out, the next sample from the source dataset is loaded in to replace it.
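To make the mechanism concrete, here is a minimal pure-Python simulation of the shuffle buffer; this is an illustrative sketch, not TensorFlow's actual implementation:

import random

def simulated_shuffle(source, buffer_size, seed=0):
    # mimics dataset.shuffle(buffer_size): keep a buffer of buffer_size
    # elements, emit a random one, refill from the source
    rng = random.Random(seed)
    it = iter(source)
    buf = []
    for item in it:                 # pre-fill the buffer
        buf.append(item)
        if len(buf) == buffer_size:
            break
    for item in it:                 # steady state: emit one, refill one
        idx = rng.randrange(len(buf))
        yield buf[idx]
        buf[idx] = item
    rng.shuffle(buf)                # source exhausted: drain the buffer
    yield from buf

print(list(simulated_shuffle(range(11), buffer_size=3)))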
A complete example with shuffle(3), batch(4), repeat(2):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = ""

import numpy as np
import tensorflow as tf

np.random.seed(0)
x = np.random.sample((11, 2))  # make a dataset from a numpy array
print(x)
print()

dataset = tf.data.Dataset.from_tensor_slices(x)
dataset = dataset.shuffle(3)
dataset = dataset.batch(4)
dataset = dataset.repeat(2)

# create the iterator
iter = dataset.make_one_shot_iterator()
el = iter.get_next()

with tf.Session() as sess:
    # 11 samples, batch_size 4, 2 epochs -> 6 batches in total;
    # loop until the iterator is exhausted instead of hard-coding
    # the number of sess.run calls
    try:
        while True:
            print(sess.run(el))
    except tf.errors.OutOfRangeError:
        pass
# source dataset
[[ 0.5488135   0.71518937]
 [ 0.60276338  0.54488318]
 [ 0.4236548   0.64589411]
 [ 0.43758721  0.891773  ]
 [ 0.96366276  0.38344152]
 [ 0.79172504  0.52889492]
 [ 0.56804456  0.92559664]
 [ 0.07103606  0.0871293 ]
 [ 0.0202184   0.83261985]
 [ 0.77815675  0.87001215]
 [ 0.97861834  0.79915856]]

# batches obtained after shuffle and batch
[[ 0.4236548   0.64589411]
 [ 0.60276338  0.54488318]
 [ 0.43758721  0.891773  ]
 [ 0.5488135   0.71518937]]
[[ 0.96366276  0.38344152]
 [ 0.56804456  0.92559664]
 [ 0.0202184   0.83261985]
 [ 0.79172504  0.52889492]]
[[ 0.07103606  0.0871293 ]
 [ 0.97861834  0.79915856]
 [ 0.77815675  0.87001215]]  # the last batch of epoch 1 has only 3 samples
[[ 0.60276338  0.54488318]
 [ 0.5488135   0.71518937]
 [ 0.43758721  0.891773  ]
 [ 0.79172504  0.52889492]]
[[ 0.4236548   0.64589411]
 [ 0.56804456  0.92559664]
 [ 0.0202184   0.83261985]
 [ 0.07103606  0.0871293 ]]
[[ 0.77815675  0.87001215]
 [ 0.96366276  0.38344152]
 [ 0.97861834  0.79915856]]  # the last batch of epoch 2 has only 3 samples
1. According to the buffer_size set in shuffle, first take three samples from the source dataset:
shuffle buffer:
[ 0.5488135 0.71518937]
[ 0.60276338 0.54488318]
[ 0.4236548 0.64589411]
2. Draw one sample at random from the buffer into the batch:
shuffle buffer:
[ 0.5488135 0.71518937]
[ 0.60276338 0.54488318]
batch:
[ 0.4236548 0.64589411]
3. The shuffle buffer now holds fewer than three samples, so pull one more sample from the source dataset:
shuffle buffer:
[ 0.5488135 0.71518937]
[ 0.60276338 0.54488318]
[ 0.43758721 0.891773 ]
4. Again draw one sample at random from the buffer into the batch:
shuffle buffer:
[ 0.5488135 0.71518937]
[ 0.43758721 0.891773 ]
batch:
[ 0.4236548 0.64589411]
[ 0.60276338 0.54488318]
5. And so on. This means that if shuffle's buffer_size = 1, the dataset is not shuffled at all, and if buffer_size equals the number of samples in the dataset, the entire dataset is shuffled uniformly at random.
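The two extreme cases can be checked directly with the simulated_shuffle sketch defined above:

print(list(simulated_shuffle(range(11), buffer_size=1)))
# 0, 1, 2, ..., 10 -- original order, no shuffling
print(list(simulated_shuffle(range(11), buffer_size=11)))
# a uniformly random permutation of 0..10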
The same TF example with buffer_size = 1:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = ""

import numpy as np
import tensorflow as tf

np.random.seed(0)
x = np.random.sample((11, 2))  # make a dataset from a numpy array
print(x)
print()

dataset = tf.data.Dataset.from_tensor_slices(x)
dataset = dataset.shuffle(1)   # buffer_size = 1: no shuffling at all
dataset = dataset.batch(4)
dataset = dataset.repeat(2)

# create the iterator
iter = dataset.make_one_shot_iterator()
el = iter.get_next()

with tf.Session() as sess:
    try:
        while True:
            print(sess.run(el))
    except tf.errors.OutOfRangeError:
        pass

# source dataset
[[ 0.5488135   0.71518937]
 [ 0.60276338  0.54488318]
 [ 0.4236548   0.64589411]
 [ 0.43758721  0.891773  ]
 [ 0.96366276  0.38344152]
 [ 0.79172504  0.52889492]
 [ 0.56804456  0.92559664]
 [ 0.07103606  0.0871293 ]
 [ 0.0202184   0.83261985]
 [ 0.77815675  0.87001215]
 [ 0.97861834  0.79915856]]

# the batches come out in the original order, and both epochs are identical
[[ 0.5488135   0.71518937]
 [ 0.60276338  0.54488318]
 [ 0.4236548   0.64589411]
 [ 0.43758721  0.891773  ]]
[[ 0.96366276  0.38344152]
 [ 0.79172504  0.52889492]
 [ 0.56804456  0.92559664]
 [ 0.07103606  0.0871293 ]]
[[ 0.0202184   0.83261985]
 [ 0.77815675  0.87001215]
 [ 0.97861834  0.79915856]]
[[ 0.5488135   0.71518937]
 [ 0.60276338  0.54488318]
 [ 0.4236548   0.64589411]
 [ 0.43758721  0.891773  ]]
[[ 0.96366276  0.38344152]
 [ 0.79172504  0.52889492]
 [ 0.56804456  0.92559664]
 [ 0.07103606  0.0871293 ]]
[[ 0.0202184   0.83261985]
 [ 0.77815675  0.87001215]
 [ 0.97861834  0.79915856]]
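Conversely, setting buffer_size to the full dataset size (11 here) before batch and repeat shuffles each epoch uniformly while still keeping the epochs separate; a one-line variant of the example above:

dataset = tf.data.Dataset.from_tensor_slices(x)
dataset = dataset.shuffle(11)  # buffer holds the whole dataset: uniform shuffle
dataset = dataset.batch(4)
dataset = dataset.repeat(2)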
Note what happens if repeat is applied before shuffle:
The official documentation says that applying repeat before shuffle can improve performance, but it blurs the epoch boundaries between samples.
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ""

import numpy as np
import tensorflow as tf

np.random.seed(0)
x = np.random.sample((11, 2))  # make a dataset from a numpy array
print(x)
print()

dataset = tf.data.Dataset.from_tensor_slices(x)
dataset = dataset.repeat(2)    # repeat BEFORE shuffle
dataset = dataset.shuffle(11)
dataset = dataset.batch(4)

# create the iterator
iter = dataset.make_one_shot_iterator()
el = iter.get_next()

with tf.Session() as sess:
    try:
        while True:
            print(sess.run(el))
    except tf.errors.OutOfRangeError:
        pass

# source dataset
[[ 0.5488135   0.71518937]
 [ 0.60276338  0.54488318]
 [ 0.4236548   0.64589411]
 [ 0.43758721  0.891773  ]
 [ 0.96366276  0.38344152]
 [ 0.79172504  0.52889492]
 [ 0.56804456  0.92559664]
 [ 0.07103606  0.0871293 ]
 [ 0.0202184   0.83261985]
 [ 0.77815675  0.87001215]
 [ 0.97861834  0.79915856]]

[[ 0.56804456  0.92559664]
 [ 0.5488135   0.71518937]
 [ 0.60276338  0.54488318]
 [ 0.07103606  0.0871293 ]]
[[ 0.96366276  0.38344152]
 [ 0.43758721  0.891773  ]
 [ 0.43758721  0.891773  ]   # the same sample appears twice in one batch
 [ 0.77815675  0.87001215]]
[[ 0.79172504  0.52889492]
 [ 0.79172504  0.52889492]   # and again here: the two epochs have blended together
 [ 0.60276338  0.54488318]
 [ 0.4236548   0.64589411]]
[[ 0.07103606  0.0871293 ]
 [ 0.4236548   0.64589411]
 [ 0.96366276  0.38344152]
 [ 0.5488135   0.71518937]]
[[ 0.97861834  0.79915856]
 [ 0.0202184   0.83261985]
 [ 0.77815675  0.87001215]
 [ 0.56804456  0.92559664]]
[[ 0.0202184   0.83261985]
 [ 0.97861834  0.79915856]]  # the last batch has 2 samples while all previous ones have 4
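If the goal is a different order every epoch without cross-epoch mixing, keep shuffle before repeat; its reshuffle_each_iteration argument (True by default in recent TF versions) re-randomizes the buffer on every pass. A sketch:

dataset = tf.data.Dataset.from_tensor_slices(x)
dataset = dataset.shuffle(11, reshuffle_each_iteration=True)
dataset = dataset.repeat(2)    # epoch 1 is fully consumed before epoch 2 starts
dataset = dataset.batch(4)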
A real-world usage example:
def input_fn(filenames, batch_size=32, num_epochs=1, perform_shuffle=False):
    print('Parsing', filenames)

    def decode_libsvm(line):
        # each line: "<label> <feat_id>:<feat_val> <feat_id>:<feat_val> ..."
        columns = tf.string_split([line], ' ')
        labels = tf.string_to_number(columns.values[0], out_type=tf.float32)
        splits = tf.string_split(columns.values[1:], ':')
        id_vals = tf.reshape(splits.values, splits.dense_shape)
        feat_ids, feat_vals = tf.split(id_vals, num_or_size_splits=2, axis=1)
        feat_ids = tf.string_to_number(feat_ids, out_type=tf.int32)
        feat_vals = tf.string_to_number(feat_vals, out_type=tf.float32)
        return {"feat_ids": feat_ids, "feat_vals": feat_vals}, labels

    # Extract lines from the input files using the Dataset API;
    # accepts a single filename or a list of filenames.
    # Pre-process with multiple threads, then prefetch.
    dataset = tf.data.TextLineDataset(filenames) \
        .map(decode_libsvm, num_parallel_calls=10) \
        .prefetch(500000)

    # Randomize the input using a window of 256 elements (read into memory).
    # Shuffling BEFORE repeat keeps epochs from blending together.
    if perform_shuffle:
        dataset = dataset.shuffle(buffer_size=256)

    dataset = dataset.repeat(num_epochs)
    dataset = dataset.batch(batch_size)  # batch size to use

    iterator = dataset.make_one_shot_iterator()
    batch_features, batch_labels = iterator.get_next()
    return batch_features, batch_labels
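This input_fn is typically handed to an Estimator; a minimal usage sketch, where my_model_fn and the libsvm file names are hypothetical placeholders:

# my_model_fn and the file names below are placeholders, not from the original post
estimator = tf.estimator.Estimator(model_fn=my_model_fn, model_dir='./model_dir')
estimator.train(input_fn=lambda: input_fn(
    ['tr.libsvm'], batch_size=32, num_epochs=10, perform_shuffle=True))
estimator.evaluate(input_fn=lambda: input_fn(
    ['va.libsvm'], batch_size=32, num_epochs=1, perform_shuffle=False))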