On my first attempt, the idea was to feed the string in like a queue: when the character currently being pushed (index j) is already in the queue, pop the front element and update maxlength. The code is as follows:
# @lc code=start
class Solution:
    def lengthOfLongestSubstring(self, x: str) -> int:
        maxlength = 0
        lens = len(x)
        # brute force: for every start index i, extend j until a repeated character appears
        for i in range(lens - 1):
            for j in range(i + 1, lens):
                if x[j] in x[i:j] or x[j] == x[i]:
                    maxlength = maxlength if maxlength > (j - i) else (j - i)
                    break
                elif j == lens - 1:
                    # reached the end of the string without hitting a repeat
                    maxlength = maxlength if maxlength > (j - i + 1) else (j - i + 1)
                    break
                else:
                    continue
        return maxlength
# @lc code=end
But the pass rate was not 100%, because single-character strings are not handled: when lens == 1 the outer loop never runs and the function returns 0 instead of 1.
Wrong Answer
879/987 cases passed (N/A)
Testcase
" "
The fix is to add a check at the beginning: if the string starts with a space or has length 1, return 1 directly.
class Solution:
    def lengthOfLongestSubstring(self, x: str) -> int:
        maxlength = 0
        lens = len(x)
        if " " in x[0:1] or lens == 1:
            return 1
        for i in range(lens - 1):
            for j in range(i + 1, lens):
                if x[j] in x[i:j] or x[j] == x[i]:
                    maxlength = maxlength if maxlength > (j - i) else (j - i)
                    break
                elif j == lens - 1:
                    maxlength = maxlength if maxlength > (j - i + 1) else (j - i + 1)
                    break
                else:
                    continue
        return maxlength
Accepted
987/987 cases passed (520 ms)
Your runtime beats 8.75 % of python3 submissions
Your memory usage beats 16.81 % of python3 submissions (15.2 MB)
A sliding-window solution with a hash set:
class Solution:
    def lengthOfLongestSubstring(self, s: str) -> int:
        # hash set recording which characters are currently in the window
        occ = set()
        n = len(s)
        # right pointer, initialized to -1, i.e. just left of the string's left boundary, not yet moved
        rk, ans = -1, 0
        for i in range(n):
            if i != 0:
                # move the left pointer one step to the right, removing one character
                occ.remove(s[i - 1])
            while rk + 1 < n and s[rk + 1] not in occ:
                # keep moving the right pointer
                occ.add(s[rk + 1])
                rk += 1
            # the characters from i to rk form a maximal substring without repeated characters
            ans = max(ans, rk - i + 1)
        return ans
A hash map needs only a single pass and is more efficient.
The Python code:
class Solution:
    def lengthOfLongestSubstring(self, s: str) -> int:
        k, res, c_dict = -1, 0, {}
        for i, c in enumerate(s):
            if c in c_dict and c_dict[c] > k:  # c is in the dict and its last occurrence lies after the current window start
                k = c_dict[c]
                c_dict[c] = i
            else:
                c_dict[c] = i
                res = max(res, i - k)
        return res
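As a quick sanity check (not part of the original post), the hash-map version can be run on the standard LeetCode examples plus the single-space case from above; the expected answers are 3, 1, 3, 0 and 1:

sol = Solution()
for s, expected in [("abcabcbb", 3), ("bbbbb", 1), ("pwwkew", 3), ("", 0), (" ", 1)]:
    assert sol.lengthOfLongestSubstring(s) == expected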
First, similar to cGAN, random noise is generated and concatenated with image A to produce the fake image \(\hat{B}\); then the same encoder from the VAE-GAN is used to encode \(\hat{B}\) into a latent vector. Finally, \(\hat{z}\) is recovered from that encoded latent vector and a loss is computed against the input noise \(z\). The data flow is \(z \rightarrow \hat{B} \rightarrow \hat{z}\) (the solid arrows in figure (d)), with two losses (a code sketch of this step follows the list below):
the adversarial loss \(L_{GAN}\)
the L1 loss \(L_{1}^{latent}\) between the noise N(z) and the latent code
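A minimal PyTorch sketch of this \(z \rightarrow \hat{B} \rightarrow \hat{z}\) step, assuming a generator G, a discriminator D, an encoder E that returns a mean and log-variance, an 8-dimensional latent code, and a BCE-style adversarial loss (these names and choices are illustrative, not the paper's exact implementation):

import torch
import torch.nn.functional as F

def clr_gan_losses(G, E, D, real_A, z_dim=8):
    # draw the input noise z and generate the fake image B_hat conditioned on image A
    z = torch.randn(real_A.size(0), z_dim, device=real_A.device)
    fake_B = G(real_A, z)
    # adversarial loss L_GAN on the fake pair (A, B_hat)
    d_out = D(real_A, fake_B)
    loss_gan = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    # encode B_hat back into a latent code and take its mean as z_hat
    mu, logvar = E(fake_B)
    # L1 loss between the recovered z_hat and the original noise z
    loss_latent = F.l1_loss(mu, z)
    return loss_gan, loss_latent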
By combining these two data flows, a bijective cycle is obtained between the output and the latent space. The "bi" in BicycleGAN comes from bijection, a mathematical term that, put simply, means a one-to-one mapping that is also invertible. In this case, BicycleGAN maps the output to the latent space and, likewise, maps the latent space back to the output. The total loss is as follows:
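As written in the BicycleGAN paper, it combines the cVAE-GAN and cLR-GAN terms:
\[
G^{*}, E^{*} = \arg\min_{G,E}\,\max_{D}\; L_{GAN}^{VAE}(G, D, E) + \lambda L_{1}^{VAE}(G, E) + L_{GAN}(G, D) + \lambda_{latent} L_{1}^{latent}(G, E) + \lambda_{KL} L_{KL}(E)
\]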
import time
import numpy as np
from visdom import Visdom

# assumed setup (in the original post env2 and pane1 are created earlier):
env2 = Visdom(env='env2')          # a second visdom environment (name assumed)
x, y = 0, 0                        # starting values assumed
pane1 = env2.line(X=np.array([x]), Y=np.array([y]))  # create the pane to be updated

for i in range(10):
    time.sleep(1)                  # push a new point once per second
    x += i
    y = (y + i) * 1.5
    print(x, y)
    env2.line(
        X=np.array([x]),
        Y=np.array([y]),
        win=pane1,                 # the win argument selects which pane to update
        update='append')           # append the new point to the existing curve
StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Authors
Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, Jaegul Choo
Abstract
Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN’s superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.
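The single-model, multi-domain behavior described above comes from conditioning one generator on a target-domain label. A minimal sketch of that conditioning step (the function name, 5-domain setup, and tensor shapes are illustrative assumptions, not the authors' code):

import torch

def concat_domain_label(img, label):
    # img: (N, C, H, W) input images; label: (N, num_domains) one-hot target-domain vectors
    n, _, h, w = img.size()
    # spatially replicate the label and concatenate it with the image along the channel axis
    label_map = label.view(n, -1, 1, 1).expand(n, label.size(1), h, w)
    return torch.cat([img, label_map], dim=1)

x = torch.randn(2, 3, 128, 128)             # two RGB face images (illustrative)
c = torch.eye(5)[torch.tensor([1, 3])]      # one-hot labels for 5 hypothetical domains
g_input = concat_domain_label(x, c)         # shape (2, 3 + 5, 128, 128), fed to the shared generator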