
The files on the mount point are not deleted after the TTL expires #5527

Open · liangyihan opened this issue Apr 24, 2024 · 0 comments

Describe the bug
Create a bucket named "test2" with a TTL (Time To Live) of 1 minute, upload files into it, and wait for the TTL to expire. The files are then no longer listed inside the bucket, but they are still present at the mount point. (The bucket/TTL setup command is not shown here; a possible equivalent is sketched after the upload commands below.)

Upload the test files through the filer:
curl -F file=@Readme.txt "http://10.xxx.xx.x:9283/home/hdpu/lyh/mount_dir/test2/20240424/Readme1.txt"
curl -F file=@Readme.txt "http://10.xxx.xx.x:9283/home/hdpu/lyh/mount_dir/test2/20240424/Readme2.txt"
curl -F file=@Readme.txt "http://10.xxx.xx.x:9283/home/hdpu/lyh/mount_dir/test2/20240424/Readme3.txt"
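
The command used to create the bucket and attach the 1-minute TTL is not included above. The sketch below shows one possible way to do it from weed shell; the -locationPrefix value is an assumption taken from the upload URLs, and the bucket may actually have been created through the S3 API instead:

# Sketch only -- the exact bucket/TTL setup command is not part of this report.
echo "
s3.bucket.create -name test2
fs.configure -locationPrefix=/home/hdpu/lyh/mount_dir/test2 -ttl=1m -apply
" | ./weed shell -master=10.xxx.xx.x:9666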

System Setup

  • Command lines used to start weed master, weed volume, weed filer (with embedded s3 and iam), and weed mount:

./weed -logdir=/data/obs/hdd/log/master master -mdir=/data/obs/ssd/data/master -peers=10.xxx.xx.x:9666 -ip=10.xxx.xx.x -port=9666 -defaultReplication=000 -volumePreallocate -volumeSizeLimitMB=80

./weed -logdir=/data/obs/hdd/log/volume volume -index=leveldb -idleTimeout=30 -dir=/data/obs/ssd/data/volume,/data/obs/hdd/data/volume,/data/obs/warm/data/volume,/data/obs/cold/data/volume -disk=ssd,hdd,warm,cold -mserver=10.xxx.xx.x:9666 -dataCenter=dc -rack=rack1 -ip=10.xxx.xx.x -port=9189 -concurrentDownloadLimitMB=2048 -max=0

./weed -logdir=/data/obs/hdd/log/filer filer -master=10.xxx.xx.x:9666 -ip=10.xxx.xx.x -port=9283 -defaultReplicaPlacement=000 -maxMB=4 -s3 -s3.allowEmptyFolder -encryptVolumeData -s3.port=9383 -iam -iam.port=9483

./weed mount -cacheCapacityMB=0 -filer=10.xxx.xx.x:9283 -volumeServerAccess=filerProxy -dir=/home/hdpu/lyh/mount_dir/ -filer.path=/home/hdpu/lyh/mount_dir/

  • OS version: CentOS 7
  • Output of weed version: seaweedfs 3.15
  • Content of filer.toml:
    [cassandra]
    enabled = true
    keyspace = "seaweedfs"
    hosts = [
      "10.xxx.xx.x:9042",
    ]
    username = ""
    password = ""
    # This changes the data layout. Only add new directories. Removing/Updating will cause data loss.
    superLargeDirectories = []
    # Name of the datacenter local to this filer, used as host selection fallback.
    localDC = ""
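
After the 1-minute TTL has expired, the mismatch can be observed by comparing a directory listing through the filer's HTTP API (using the filer address and paths from the commands above) with a listing of the FUSE mount. This is a sketch of the check, not necessarily the exact commands behind the screenshots below:

# As reported: listing through the filer no longer shows the files ...
curl -H "Accept: application/json" "http://10.xxx.xx.x:9283/home/hdpu/lyh/mount_dir/test2/20240424/?pretty=y"
# ... while the same directory on the mount point still lists them
ls -l /home/hdpu/lyh/mount_dir/test2/20240424/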

Expected behavior
The files at the mount point should also be deleted once the TTL expires.

Screenshots
(two screenshots attached)

