Commit 7ebe9fa

Squashed commit of the following:
commit 6a734cd
Merge: 45205e2 d2d35b6
Author: wangdan-fit2cloud <79562285+wangdan-fit2cloud@users.noreply.github.com>
Date:   Fri May 10 14:57:47 2024 +0800

    Merge pull request 1Panel-dev#415 from 1Panel-dev/pr@main@fix-bug

    fix: fix the document similarity setting issue

commit d2d35b6
Author: wangdan-fit2cloud <dan.wang@fit2cloud.com>
Date:   Fri May 10 14:55:47 2024 +0800

    fix: fix the document similarity setting issue

commit 45205e2
Merge: d724a54 19057b1
Author: wangdan-fit2cloud <79562285+wangdan-fit2cloud@users.noreply.github.com>
Date:   Fri May 10 14:15:32 2024 +0800

    Merge pull request 1Panel-dev#414 from 1Panel-dev/pr@main@peaf-login

    perf: optimize the login page

commit 19057b1
Author: wangdan-fit2cloud <dan.wang@fit2cloud.com>
Date:   Fri May 10 14:13:44 2024 +0800

    perf: optimize the login page

commit d724a54
Merge: 18861b4 61a0b2c
Author: wangdan-fit2cloud <79562285+wangdan-fit2cloud@users.noreply.github.com>
Date:   Fri May 10 11:06:37 2024 +0800

    Merge pull request 1Panel-dev#411 from 1Panel-dev/pr@main@fix-bug

    fix: fix the version number translation issue

commit 61a0b2c
Author: wangdan-fit2cloud <dan.wang@fit2cloud.com>
Date:   Fri May 10 11:05:04 2024 +0800

    fix: fix the version number translation issue

commit 18861b4
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Thu May 9 20:45:14 2024 +0800

    fix: fix lazy loading of uploaded documents failing to load the second page (1Panel-dev#407)

commit b123f0f
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Thu May 9 19:54:25 2024 +0800

    perf: optimize front-end lazy loading of document paragraphs (1Panel-dev#406)

commit cf4ce7e
Merge: 7f30d03 8038385
Author: wangdan-fit2cloud <79562285+wangdan-fit2cloud@users.noreply.github.com>
Date:   Thu May 9 18:02:47 2024 +0800

    Merge pull request 1Panel-dev#405 from 1Panel-dev/pr@main@perf-cross_domain

    Filter empty lines from the cross-domain address list

commit 8038385
Author: wangdan-fit2cloud <dan.wang@fit2cloud.com>
Date:   Thu May 9 17:58:21 2024 +0800

    perf: filter empty lines from the cross-domain address list

commit 5838a4f
Author: wangdan-fit2cloud <dan.wang@fit2cloud.com>
Date:   Thu May 9 17:46:57 2024 +0800

    perf: lazy loading for paragraph preview

commit 7f30d03
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Thu May 9 15:55:35 2024 +0800

    fix: fix word segmentation exceeding the database size limit (1Panel-dev#401)

commit 8159fef
Merge: 3fb6192 61819fd
Author: wangdan-fit2cloud <79562285+wangdan-fit2cloud@users.noreply.github.com>
Date:   Thu May 9 10:27:17 2024 +0800

    Merge pull request 1Panel-dev#397 from 1Panel-dev/pr@main@fix-bug

    fix: fix the batch migration issue

commit 61819fd
Author: wangdan-fit2cloud <dan.wang@fit2cloud.com>
Date:   Thu May 9 10:24:38 2024 +0800

    fix: fix the batch migration issue

commit 3fb6192
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Wed May 8 18:46:58 2024 +0800

    fix: cross-domain settings not taking effect (1Panel-dev#394)

commit 69e39f5
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Wed May 8 17:40:01 2024 +0800

    feat: support setting paragraph titles as related questions during import (1Panel-dev#177) (1Panel-dev#392)

commit 4da8b1b
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Wed May 8 17:31:56 2024 +0800

    feat: support setting a similarity threshold for direct answers (1Panel-dev#371)

commit 267be44
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Wed May 8 17:13:13 2024 +0800

    feat: cross-domain settings (1Panel-dev#276)

commit d4e742f
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Wed May 8 10:40:15 2024 +0800

    feat: paragraph management supports batch migration and deletion of paragraphs (1Panel-dev#113, 1Panel-dev#103)

commit 8204d5f
Merge: 48496bc 23ed472
Author: wangdan-fit2cloud <79562285+wangdan-fit2cloud@users.noreply.github.com>
Date:   Tue May 7 17:09:20 2024 +0800

    Merge pull request 1Panel-dev#381 from 1Panel-dev/pr@main@fix-bugs

    fix: fix the quick-edit component issue and support clearing the input content with one click

commit 23ed472
Author: wangdan-fit2cloud <dan.wang@fit2cloud.com>
Date:   Tue May 7 17:06:39 2024 +0800

    fix: fix the quick-edit component issue and allow deleting the input content with one click

commit 48496bc
Merge: 7a08f03 1e67f39
Author: wangdan-fit2cloud <79562285+wangdan-fit2cloud@users.noreply.github.com>
Date:   Tue May 7 16:34:43 2024 +0800

    Merge pull request 1Panel-dev#380 from 1Panel-dev/pr@main@perf-migrate

    perf: filter the knowledge base options when migrating documents

commit 1e67f39
Author: wangdan-fit2cloud <dan.wang@fit2cloud.com>
Date:   Tue May 7 16:30:10 2024 +0800

    perf: filter the knowledge base options when migrating documents

commit 7a08f03
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Tue May 7 13:54:13 2024 +0800

    fix: validation fails when creating a new user whose username contains 0 1Panel-dev#358 (1Panel-dev#375)

commit 77d71f9
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Tue May 7 13:50:39 2024 +0800

    fix: adjust document vectorization retry and document status handling (1Panel-dev#373)

commit c1b6ec6
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Tue May 7 12:18:03 2024 +0800

    fix: import fails when the imported document contains special characters. 1Panel-dev#363 (1Panel-dev#372)

commit 7d62842
Author: shaohuzhang1 <80892890+shaohuzhang1@users.noreply.github.com>
Date:   Tue May 7 10:34:38 2024 +0800

    fix: error when packaging UI files 1Panel-dev#348 (1Panel-dev#368)
cuongnn-smartosc committed May 12, 2024
1 parent 0945911 commit 7ebe9fa
Showing 45 changed files with 1,286 additions and 347 deletions.
10 changes: 8 additions & 2 deletions apps/application/chat_pipeline/I_base_chat_pipeline.py
@@ -19,7 +19,7 @@ class ParagraphPipelineModel:

def __init__(self, _id: str, document_id: str, dataset_id: str, content: str, title: str, status: str,
is_active: bool, comprehensive_score: float, similarity: float, dataset_name: str, document_name: str,
hit_handling_method: str):
hit_handling_method: str, directly_return_similarity: float):
self.id = _id
self.document_id = document_id
self.dataset_id = dataset_id
@@ -32,6 +32,7 @@ def __init__(self, _id: str, document_id: str, dataset_id: str, content: str, ti
self.dataset_name = dataset_name
self.document_name = document_name
self.hit_handling_method = hit_handling_method
self.directly_return_similarity = directly_return_similarity

def to_dict(self):
return {
@@ -56,6 +57,7 @@ def __init__(self):
self.document_name = None
self.dataset_name = None
self.hit_handling_method = None
self.directly_return_similarity = 0.9

def add_paragraph(self, paragraph):
if isinstance(paragraph, Paragraph):
@@ -83,6 +85,10 @@ def add_hit_handling_method(self, hit_handling_method):
self.hit_handling_method = hit_handling_method
return self

def add_directly_return_similarity(self, directly_return_similarity):
self.directly_return_similarity = directly_return_similarity
return self

def add_comprehensive_score(self, comprehensive_score: float):
self.comprehensive_score = comprehensive_score
return self
@@ -98,7 +104,7 @@ def build(self):
self.paragraph.get('status'),
self.paragraph.get('is_active'),
self.comprehensive_score, self.similarity, self.dataset_name,
self.document_name, self.hit_handling_method)
self.document_name, self.hit_handling_method, self.directly_return_similarity)


class IBaseChatPipelineStep:
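Note: the new directly_return_similarity field (default 0.9) is the per-document threshold that the later pipeline steps compare the retrieval similarity against before short-circuiting to a direct answer. A minimal, self-contained restatement of that condition (the function name is illustrative, not part of the code base):

    def should_directly_return(hit_handling_method: str,
                               similarity: float,
                               directly_return_similarity: float = 0.9) -> bool:
        # A paragraph is returned verbatim only when its document is configured for
        # direct return AND the similarity clears the document's own threshold,
        # instead of the previous unconditional direct return.
        return (hit_handling_method == 'directly_return'
                and similarity >= directly_return_similarity)

    # With the default 0.9 threshold, a 0.85 hit is no longer short-circuited.
    assert should_directly_return('directly_return', 0.95) is True
    assert should_directly_return('directly_return', 0.85) is False
    assert should_directly_return('optimization', 0.99) is False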
@@ -138,8 +138,8 @@ def get_stream_result(message_list: List[BaseMessage],
if paragraph_list is None:
paragraph_list = []
directly_return_chunk_list = [AIMessageChunk(content=paragraph.content)
for paragraph in paragraph_list if
paragraph.hit_handling_method == 'directly_return']
for paragraph in paragraph_list if (
paragraph.hit_handling_method == 'directly_return' and paragraph.similarity >= paragraph.directly_return_similarity)]
if directly_return_chunk_list is not None and len(directly_return_chunk_list) > 0:
return iter(directly_return_chunk_list), False
elif len(paragraph_list) == 0 and no_references_setting.get(
@@ -52,6 +52,7 @@ def reset_paragraph(paragraph: Dict, embedding_list: List) -> ParagraphPipelineM
.add_dataset_name(paragraph.get('dataset_name'))
.add_document_name(paragraph.get('document_name'))
.add_hit_handling_method(paragraph.get('hit_handling_method'))
.add_directly_return_similarity(paragraph.get('directly_return_similarity'))
.build())

@staticmethod
@@ -81,7 +82,10 @@ def list_paragraph(embedding_list: List, vector):
vector.delete_by_paragraph_id(paragraph_id)
# If there is a direct return, the item is returned directly.
hit_handling_method_paragraph = [paragraph for paragraph in paragraph_list if
paragraph.get('hit_handling_method') == 'directly_return']
(paragraph.get(
'hit_handling_method') == 'directly_return' and BaseSearchDatasetStep.get_similarity(
paragraph, embedding_list) >= paragraph.get(
'directly_return_similarity'))]
if len(hit_handling_method_paragraph) > 0:
# Find the highest rating.
return [sorted(hit_handling_method_paragraph,
@@ -0,0 +1,24 @@
# Generated by Django 4.1.13 on 2024-05-08 13:57

import django.contrib.postgres.fields
from django.db import migrations, models


class Migration(migrations.Migration):

dependencies = [
('application', '0005_alter_chat_abstract_alter_chatrecord_answer_text'),
]

operations = [
migrations.AddField(
model_name='applicationapikey',
name='allow_cross_domain',
field=models.BooleanField(default=False, verbose_name='是否允许跨域'),
),
migrations.AddField(
model_name='applicationapikey',
name='cross_domain_list',
field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(blank=True, max_length=128), default=list, size=None, verbose_name='跨域列表'),
),
]
18 changes: 11 additions & 7 deletions apps/application/models/api_key_model.py
@@ -17,11 +17,15 @@


class ApplicationApiKey(AppModelMixin):
id = models.UUIDField(primary_key=True, max_length=128, default=uuid.uuid1, editable=False, verbose_name="The key.id")
secret_key = models.CharField(max_length=1024, verbose_name="The Secret Key", unique=True)
user = models.ForeignKey(User, on_delete=models.CASCADE, verbose_name="Usersid")
application = models.ForeignKey(Application, on_delete=models.CASCADE, verbose_name="Applicationsid")
is_active = models.BooleanField(default=True, verbose_name="is opened.")
id = models.UUIDField(primary_key=True, max_length=128, default=uuid.uuid1, editable=False, verbose_name="PrimaryId")
secret_key = models.CharField(max_length=1024, verbose_name="Secret Key", unique=True)
user = models.ForeignKey(User, on_delete=models.CASCADE, verbose_name="User Id")
application = models.ForeignKey(Application, on_delete=models.CASCADE, verbose_name="Application Id")
is_active = models.BooleanField(default=True, verbose_name="Is Active")
allow_cross_domain = models.BooleanField(default=False, verbose_name="Whether cross-domain is allowed")
cross_domain_list = ArrayField(verbose_name="Cross-domain list",
base_field=models.CharField(max_length=128, blank=True)
, default=list)

class Meta:
db_table = "application_api_key"
@@ -31,7 +35,7 @@ class ApplicationAccessToken(AppModelMixin):
"""
Applied certificationtoken
"""
application = models.OneToOneField(Application, primary_key=True, on_delete=models.CASCADE, verbose_name="Applicationsid")
application = models.OneToOneField(Application, primary_key=True, on_delete=models.CASCADE, verbose_name="Application Id")
access_token = models.CharField(max_length=128, verbose_name="User Open Access Certificationtoken", unique=True)
is_active = models.BooleanField(default=True, verbose_name="Opening public access.")
access_num = models.IntegerField(default=100, verbose_name="Number of Visits")
@@ -47,7 +51,7 @@ class Meta:

class ApplicationPublicAccessClient(AppModelMixin):
id = models.UUIDField(max_length=128, primary_key=True, verbose_name="Public access link clientid")
application = models.ForeignKey(Application, on_delete=models.CASCADE, verbose_name="Applicationsid")
application = models.ForeignKey(Application, on_delete=models.CASCADE, verbose_name="Application Id")
access_num = models.IntegerField(default=0, verbose_name="Number of visits.")
intraday_access_num = models.IntegerField(default=0, verbose_name="Number of visits on that day.")

32 changes: 21 additions & 11 deletions apps/application/serializers/application_serializers.py
@@ -37,7 +37,6 @@
from embedding.models import SearchMode
from setting.models import AuthOperate
from setting.models.model_management import Model
from setting.models_provider.constants.model_provider_constants import ModelProvideConstants
from setting.serializers.provider_serializers import ModelSerializer
from smartdoc.conf import PROJECT_DIR

@@ -583,6 +582,15 @@ def list(self, with_valid=True):
class Edit(serializers.Serializer):
is_active = serializers.BooleanField(required=False, error_messages=ErrMessage.boolean("Is Available"))

allow_cross_domain = serializers.BooleanField(required=False,
error_messages=ErrMessage.boolean("是否允许跨域"))

cross_domain_list = serializers.ListSerializer(required=False,
child=serializers.CharField(required=True,
error_messages=ErrMessage.char(
"跨域列表")),
error_messages=ErrMessage.char("跨域地址"))

class Operate(serializers.Serializer):
application_id = serializers.UUIDField(required=True, error_messages=ErrMessage.uuid("Applicationsid"))

@@ -599,15 +607,17 @@ def delete(self, with_valid=True):
def edit(self, instance, with_valid=True):
if with_valid:
self.is_valid(raise_exception=True)
ApplicationSerializer.Edit(data=instance).is_valid(raise_exception=True)

ApplicationSerializer.ApplicationKeySerializer.Edit(data=instance).is_valid(raise_exception=True)
api_key_id = self.data.get("api_key_id")
application_id = self.data.get('application_id')
application_api_key = QuerySet(ApplicationApiKey).filter(id=api_key_id,
application_id=application_id).first()
if application_api_key is None:
raise AppApiException(500, '不存在')
if 'is_active' in instance and instance.get('is_active') is not None:
api_key_id = self.data.get("api_key_id")
application_id = self.data.get('application_id')
application_api_key = QuerySet(ApplicationApiKey).filter(id=api_key_id,
application_id=application_id).first()
if application_api_key is None:
raise AppApiException(500, 'There is no')

application_api_key.is_active = instance.get('is_active')
application_api_key.save()
if 'allow_cross_domain' in instance and instance.get('allow_cross_domain') is not None:
application_api_key.allow_cross_domain = instance.get('allow_cross_domain')
if 'cross_domain_list' in instance and instance.get('cross_domain_list') is not None:
application_api_key.cross_domain_list = instance.get('cross_domain_list')
application_api_key.save()
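With the Edit serializer extended above, an API key's cross-domain settings can be changed through the same edit call that toggles is_active. A hedged client-side sketch; the route, port, and Authorization header format are assumptions for illustration and should be checked against the project's urls.py:

    import requests

    application_id = "<application uuid>"
    api_key_id = "<api key uuid>"
    # Hypothetical path; not taken from this diff.
    url = f"http://localhost:8080/api/application/{application_id}/api_key/{api_key_id}"

    payload = {
        "is_active": True,
        "allow_cross_domain": True,
        # An empty list (or omitting the field) keeps the wildcard "*" behaviour
        # implemented by CrossDomainMiddleware further down in this commit.
        "cross_domain_list": ["https://example.com", "https://docs.example.com"],
    }

    response = requests.put(url, json=payload, headers={"Authorization": "<admin token>"})
    response.raise_for_status()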
@@ -2,7 +2,8 @@ SELECT
paragraph.*,
dataset."name" AS "dataset_name",
"document"."name" AS "document_name",
"document"."hit_handling_method" AS "hit_handling_method"
"document"."hit_handling_method" AS "hit_handling_method",
"document"."directly_return_similarity" as "directly_return_similarity"
FROM
paragraph paragraph
LEFT JOIN dataset dataset ON dataset."id" = paragraph.dataset_id
9 changes: 6 additions & 3 deletions apps/application/swagger_api/application_api.py
@@ -99,9 +99,12 @@ def get_request_body_api():
type=openapi.TYPE_OBJECT,
required=[],
properties={
'is_active': openapi.Schema(type=openapi.TYPE_BOOLEAN, title="is activated.",
description="is activated."),

'is_active': openapi.Schema(type=openapi.TYPE_BOOLEAN, title="Is Active",
description="whether to activate"),
'allow_cross_domain': openapi.Schema(type=openapi.TYPE_BOOLEAN, title="Whether cross-domain is allowed",
description="Whether cross-domain is allowed"),
'cross_domain_list': openapi.Schema(type=openapi.TYPE_ARRAY, title='Cross-domain list',
items=openapi.Schema(type=openapi.TYPE_STRING))
}
)

25 changes: 21 additions & 4 deletions apps/common/event/listener_manage.py
@@ -51,8 +51,15 @@ def __init__(self, problem_id: str, problem_content: str):


class UpdateEmbeddingDatasetIdArgs:
def __init__(self, source_id_list: List[str], target_dataset_id: str):
self.source_id_list = source_id_list
def __init__(self, paragraph_id_list: List[str], target_dataset_id: str):
self.paragraph_id_list = paragraph_id_list
self.target_dataset_id = target_dataset_id


class UpdateEmbeddingDocumentIdArgs:
def __init__(self, paragraph_id_list: List[str], target_document_id: str, target_dataset_id: str):
self.paragraph_id_list = paragraph_id_list
self.target_document_id = target_document_id
self.target_dataset_id = target_dataset_id


@@ -213,13 +220,23 @@ def update_problem(args: UpdateProblemArgs):

@staticmethod
def update_embedding_dataset_id(args: UpdateEmbeddingDatasetIdArgs):
VectorStore.get_embedding_vector().update_by_source_ids(args.source_id_list,
{'dataset_id': args.target_dataset_id})
VectorStore.get_embedding_vector().update_by_paragraph_ids(args.paragraph_id_list,
{'dataset_id': args.target_dataset_id})

@staticmethod
def update_embedding_document_id(args: UpdateEmbeddingDocumentIdArgs):
VectorStore.get_embedding_vector().update_by_paragraph_ids(args.paragraph_id_list,
{'document_id': args.target_document_id,
'dataset_id': args.target_dataset_id})

@staticmethod
def delete_embedding_by_source_ids(source_ids: List[str]):
VectorStore.get_embedding_vector().delete_by_source_ids(source_ids, SourceType.PROBLEM)

@staticmethod
def delete_embedding_by_paragraph_ids(paragraph_ids: List[str]):
VectorStore.get_embedding_vector().delete_by_paragraph_ids(paragraph_ids)

@staticmethod
def delete_embedding_by_dataset_id_list(source_ids: List[str]):
VectorStore.get_embedding_vector().delete_by_dataset_id_list(source_ids)
39 changes: 39 additions & 0 deletions apps/common/middleware/cross_domain_middleware.py
@@ -0,0 +1,39 @@
# coding=utf-8
"""
@project: maxkb
@Author:虎
@file: cross_domain_middleware.py
@date:2024/5/8 13:36
@desc:
"""
from django.db.models import QuerySet
from django.http import HttpResponse
from django.utils.deprecation import MiddlewareMixin

from application.models.api_key_model import ApplicationApiKey


class CrossDomainMiddleware(MiddlewareMixin):

def process_request(self, request):
if request.method == 'OPTIONS':
return HttpResponse(status=200,
headers={
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET,POST,DELETE,PUT",
"Access-Control-Allow-Headers": "Origin,X-Requested-With,Content-Type,Accept,Authorization,token"})

def process_response(self, request, response):
auth = request.META.get('HTTP_AUTHORIZATION')
origin = request.META.get('HTTP_ORIGIN')
if auth is not None and str(auth).startswith("application-") and origin is not None:
application_api_key = QuerySet(ApplicationApiKey).filter(secret_key=auth).first()
if application_api_key.allow_cross_domain:
response['Access-Control-Allow-Methods'] = 'GET,POST,DELETE,PUT'
response[
'Access-Control-Allow-Headers'] = "Origin,X-Requested-With,Content-Type,Accept,Authorization,token"
if application_api_key.cross_domain_list is None or len(application_api_key.cross_domain_list) == 0:
response['Access-Control-Allow-Origin'] = "*"
elif application_api_key.cross_domain_list.__contains__(origin):
response['Access-Control-Allow-Origin'] = origin
return response
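The middleware above answers OPTIONS preflights unconditionally and, for requests authenticated with an application API key, derives Access-Control-Allow-Origin from that key's settings. A standalone restatement of the decision made in process_response (illustrative only; the middleware itself still has to be registered in Django's MIDDLEWARE setting, which is not shown in this diff):

    from typing import List, Optional

    def resolve_allow_origin(allow_cross_domain: bool,
                             cross_domain_list: List[str],
                             origin: Optional[str]) -> Optional[str]:
        # No CORS header when the key does not allow cross-domain calls,
        # "*" when no whitelist is configured, and the echoed Origin only
        # when it appears in the whitelist.
        if not allow_cross_domain or origin is None:
            return None
        if not cross_domain_list:
            return "*"
        return origin if origin in cross_domain_list else None

    assert resolve_allow_origin(True, [], "https://example.com") == "*"
    assert resolve_allow_origin(True, ["https://example.com"], "https://example.com") == "https://example.com"
    assert resolve_allow_origin(True, ["https://example.com"], "https://evil.test") is None
    assert resolve_allow_origin(False, [], "https://example.com") is None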
15 changes: 11 additions & 4 deletions apps/common/util/ts_vecto_util.py
@@ -11,6 +11,7 @@
from typing import List

import jieba
import jieba.posseg
from jieba import analyse

from common.util.split_model import group_by
@@ -25,7 +26,9 @@
word_pattern_list = [r"v\d+.\d+.\d+",
r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}"]

remove_chars = '\n , :\'<>!@#¥%……&*()!@#$%^&*(): ;,/"./-'
remove_chars = '\n , :\'<>!@#¥%……&*()!@#$%^&*(): ;,/"./'

jieba_remove_flag_list = ['x', 'w']


def get_word_list(text: str):
@@ -81,9 +84,13 @@ def to_ts_vector(text: str):
word_dict = to_word_dict(word_list, text)
# Replace the string.
text = replace_word(word_dict, text)
# The word
result = jieba.tokenize(text, mode='search')
result_ = [{'word': get_key_by_word_dict(item[0], word_dict), 'index': item[1]} for item in result]
# Participle
filter_word = jieba.analyse.extract_tags(text, topK=100)
result = jieba.lcut(text, HMM=True, use_paddle=True)
# Filter punctuation
result = [item for item in result if filter_word.__contains__(item) and len(item) < 10]
result_ = [{'word': get_key_by_word_dict(result[index], word_dict), 'index': index} for index in
range(len(result))]
result_group = group_by(result_, lambda r: r['word'])
return " ".join(
[f"{key.lower()}:{','.join([str(item['index'] + 1) for item in result_group[key]][:20])}" for key in
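The to_ts_vector change replaces jieba.tokenize(mode='search') with jieba.lcut filtered by jieba.analyse.extract_tags, so only top-ranked keywords shorter than 10 characters are kept, which appears to be how the resulting tsvector is held under the database size limit (cf. the earlier fix 1Panel-dev#401). A simplified, runnable approximation that skips the project's word_dict replacement step and omits use_paddle (assumes only that jieba is installed):

    import jieba
    import jieba.analyse

    def simple_ts_vector(text: str, top_k: int = 100) -> str:
        # Keep tokens that rank among the top_k keywords and are shorter than
        # 10 characters, then emit "word:pos1,pos2,..." pairs (positions are
        # 1-based indexes into the filtered token list, capped at 20).
        keywords = set(jieba.analyse.extract_tags(text, topK=top_k))
        tokens = [t for t in jieba.lcut(text, HMM=True) if t in keywords and len(t) < 10]
        positions = {}
        for index, token in enumerate(tokens):
            positions.setdefault(token.lower(), []).append(index + 1)
        return " ".join(f"{word}:{','.join(str(i) for i in pos[:20])}"
                        for word, pos in positions.items())

    print(simple_ts_vector("MaxKB 支持文档分段的批量迁移和删除"))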
@@ -0,0 +1,18 @@
# Generated by Django 4.1.13 on 2024-05-08 16:43

from django.db import migrations, models


class Migration(migrations.Migration):

dependencies = [
('dataset', '0003_document_hit_handling_method'),
]

operations = [
migrations.AddField(
model_name='document',
name='directly_return_similarity',
field=models.FloatField(default=0.9, verbose_name='直接回答相似度'),
),
]
1 change: 1 addition & 0 deletions apps/dataset/models/data_set.py
@@ -66,6 +66,7 @@ class Document(AppModelMixin):
hit_handling_method = models.CharField(verbose_name='Method of Treatment', max_length=20,
choices=HitHandlingMethod.choices,
default=HitHandlingMethod.optimization)
directly_return_similarity = models.FloatField(verbose_name='直接回答相似度', default=0.9)

meta = models.JSONField(verbose_name="The data", default=dict)

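The new Document.directly_return_similarity column (default 0.9) is the per-document threshold read by the search and chat steps earlier in this commit. A hedged sketch of tuning it from a Django shell inside the project (the document id is a placeholder; the import path follows the apps/ layout shown in this diff):

    from django.db.models import QuerySet

    from dataset.models.data_set import Document

    # Lower the direct-return threshold for one document: its 'directly_return'
    # paragraphs will then be returned verbatim at >= 0.8 similarity instead of >= 0.9.
    QuerySet(Document).filter(id="<document uuid>").update(directly_return_similarity=0.8)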
