
Thread: Standard for compression libraries API

  1. #1
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    735
    Thanked 660 Times in 354 Posts

    Standard API for compression libraries

    the current version of the proposal is kept at http://haskell.org/haskellwiki/FreeA...sion_libraries
    source code: http://www.haskell.org/bz/cls.zip


    the main reason for freearc's success is its use of leading compression algorithms. but not every great algorithm is open-source, which forces advanced users to rely on the "external compressors" feature, and that isn't super-handy

    external programs have the advantage of being absolutely independent of me. everyone can develop a compressor that is usable standalone and at the same time easily integrated with FA, while adding new algorithms to FA itself needs co-operation with me. now i think that by providing the same level of independence for compressors developed as dlls we can make things better

    so i propose: a standard API for compression dlls. once you have a dll developed according to this API, you can just drop it into the FreeArc folder (or that of any other program supporting this standard) and immediately use it for compression and decompression. moreover, it will be possible to download-on-demand the dlls required to decompress your archive, just like it's done now in media players


    my proposal is based on my experience of adding various algorithms to FA. it's highly flexible to allow further extensions w/o losing backward compatibility; at the same time i tried to keep basic operations simple

    1) the library should be provided as a dll named cls-*.dll: this makes it simpler to find all compatible libs in a large directory

    2) the only function that should be exported is

    int ClsMain(CALLBACK* cb, void* instance)

    where

    typedef int CALLBACK(char *what, void* instance, void *ptr, int n)

    3) all interaction with the caller is implemented via callbacks. the string 'what' describes the operation we ask it to perform, 'instance' allows passing instantiation-specific parameters (important for multithreading environments), while 'ptr' and 'n' are used to pass operation parameters. operations requiring more params can use ptr as a pointer to a structure

    4) the minimum set of operations that should be supported consists of:

    cb("action", instance, buf, len) - puts "compress" or "decompress" into buf. required to determine which operation ClsMain should perform

    cb("read", instance, buf, len) - reads input data into buf. returns
    >0 - amount of data read
    =0 - EOF
    <0 - errorcode

    cb("write", instance, buf, len) - the same for writing data

    compression methods supporting multiple output streams (such as bcj2) may append a stream number to read or write:
    cb("write0", instance, buf, len)
    cb("write1", instance, buf, len)
    ...

    the following action may be used to determine compression parameters:
    cb("parameters", instance, buf, len) - puts a string representing the compression parameters into buf
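    For illustration, a minimal codec under this API could look like the sketch below - a trivial "copy" codec that just pumps data from "read" to "write". The buffer size and error handling here are my own choices, not part of the proposal:

```c
#include <string.h>

/* callback signature from the proposal */
typedef int CALLBACK(char *what, void *instance, void *ptr, int n);

/* trivial "copy" codec: pumps all input to output unchanged */
int ClsMain(CALLBACK *cb, void *instance)
{
    char action[16];
    cb("action", instance, action, sizeof(action));
    /* a real codec would branch on "compress"/"decompress" here */

    char buf[4096];
    for (;;) {
        int len = cb("read", instance, buf, sizeof(buf));
        if (len == 0) return 0;     /* EOF: operation complete */
        if (len < 0)  return len;   /* propagate error code */
        int res = cb("write", instance, buf, len);
        if (res < 0) return res;    /* write failed */
    }
}
```

    A real dll would export ClsMain and get the callback from the host; here the host side is only simulated.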


    that's all for a start. one interesting idea would be an implementation of code that turns such a ClsMain into a standalone compressor, i.e. a standard shell with all the file/error/crc/cmdline mangling, so that a developer can focus on writing just the compression code itself. this code could interact either with dlls or be statically linked with a ClsMain-style library
    Last edited by Bulat Ziganshin; 30th September 2008 at 18:53.

  2. #2
    Member chornobyl's Avatar
    Join Date
    May 2008
    Location
    ua/kiev
    Posts
    153
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Well, it seems not just a good solution - it's more like revolutionary. Downloading the DLLs needed to decompress a file - imho, for an archiver this is a first.

    FA performs the tasks of multithreading, memory allocation and buffered output, and the only job for the dll is de/compression - this simplifies dll design considerably.

    Good Luck Bulat

  3. #3
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    the main reason of freearc success is its use of leading compression algorithms. [...]
    Thanks Bulat!

    It's very important because the api will eliminate the command-line dependence. And then the gui will access the lib directly.

    Finally the gui independence is coming!

  4. #4
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by chornobyl View Post
    Well it seems not just good solution
    It's more like revolutionary
    Download needed DLLs to decompress file - imho for archiver this is first time ever.
    It's quite usual.
    Squeez, Total Commander, Directory Opus, (?)Izarc, possibly many more.

  5. #5
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Bulat, why do you use strings?
    I don't see any advantage over ints.
    Drawbacks are slower parsing (yeah, I know it doesn't matter, but why not?) and problems for unicode programs.

  6. #6
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Advantages:

    Faster (eliminates frontend-backend communication)
    More independent - no more old command-line interface
    Less dissonance - no more frontend doing one thing and backend doing another
    GUI flexibility - the GUI does not need to "mimic" the command line for communication

    Disadvantages:
    Less secure
    A bit more complex

    Windows and Macintosh do this. GNU/Linux does not.

  7. #7
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by lunaris View Post
    Advantages: [...]
    I'm not writing about advantages of having a plugin interface. Just about a minor implementation detail - why does Bulat use strings to parse parameters while ints seem clearly better?

  8. #8
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,239
    Thanks
    192
    Thanked 968 Times in 501 Posts
    I'd use a function pointer table instead of ClsMain and callback and text ids.
    It had better be protected with a crc (and a version id etc) too.
    Also I think that interface design requires some further analysis.
    Like, what about detectors and filters, or having several interface
    classes in a single dll?
    Also there're threading issues (like, the application allowing to use up to N
    threads) and whether the dll is thread-safe or not (if not, it can be secured
    by loading multiple instances of the dll - might be a useful feature, as many
    experimental compressors are not really encapsulated).
    Also memory allocation should better be done by application's callbacks.
    Some interface methods are required for initialization and model flush
    (which are not the same as there might be some precalculation required
    only once).
    Etc etc.
    One point I'm especially interested in is interface alternatives for
    codec i/o. Like, allowing a CM compressor to output an array
    of bits and probabilities, leaving the entropy coder choice to
    application - this is very important as stopping a usual rc
    implementation on buffer overflow is usually quite complicated.
    And another alternative might be useful for LZ-likes and preprocessors -
    some alphabet extensions. What I mean is that LZ can allocate some
    alphabets and then return a string of extended "symbols" with alphabet
    selectors - which again can be entropy coded by some other module.
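    For comparison, the function-pointer-table alternative Shelwien describes might look roughly like this sketch. The struct layout, field names and the size/version fields are purely illustrative, populated here with a trivial "store" codec:

```c
#include <stdint.h>
#include <string.h>

/* hypothetical vtable-style interface: the dll exports one struct of
   function pointers instead of a single ClsMain entry point */
typedef struct ClsVTable {
    uint32_t size;       /* sizeof(ClsVTable): lets a host detect older/newer layouts */
    uint32_t version;    /* interface version id, as Shelwien suggests */
    int (*compress)  (void *instance, const void *in, int inlen, void *out, int outlen);
    int (*decompress)(void *instance, const void *in, int inlen, void *out, int outlen);
} ClsVTable;

/* trivial "store" codec used to populate the table */
static int store(void *instance, const void *in, int inlen, void *out, int outlen)
{
    (void)instance;
    if (inlen > outlen) return -1;   /* output buffer too small */
    memcpy(out, in, inlen);
    return inlen;                    /* bytes produced */
}

/* what the dll would export */
const ClsVTable cls_table = { sizeof(ClsVTable), 1, store, store };
```

    The size field is a common trick for struct-based plugin ABIs: an older host can refuse (or partially use) a table larger than it expects.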
    Last edited by Shelwien; 30th September 2008 at 07:43.

  9. The Following User Says Thank You to Shelwien For This Useful Post:

    Bulat Ziganshin (17th February 2017)

  10. #9
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by m^2 View Post
    I'm not writing about advantages of having a plugin interface. Just about a minor implementation detail - why does Bulat use strings to parse parameters while ints seem clearly better?
    I know

    More advantages:

    A common and simple develpment environment for apps
    A common access for all compressors
    Multiple compression techniques in a single file.

  11. #10
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    735
    Thanked 660 Times in 354 Posts
    thanks to all for the responses, and especially to Eugeny

    strings make it simpler for independent vendors to add new variants, but i don't expect too much activity here, so we can switch to ints


    >One point I'm especially interested in is interface alternatives for
    codec i/o.

    i think you look too far into the future. even now, with this simple proposal, i'm not sure that the initiative will be supported by other developers. anyway, you can provide such an API as an extension to this proposal, can't you?

    >I'd use a function pointer table instead of ClsMain and callback and text ids.

    if you make just a function list {read, write...} it will be hard to extend. if you make smth like {{READ_OP, read}, {WRITE_OP, write}...} - i don't see much difference. the drawbacks of tables are the inability to handle unsupported methods, the inability to process whole ranges of numbers (such as READ_OP..READ_OP+99), and the inability to easily make "fallbacks" like this:

    callback(...)
    {
        if (action == GET_NUM_THREADS) return 2;
        else return prev_callback(...);
    }

    so i think that one ClsMain and one callback is a bit more flexible and easier to implement, although the difference isn't that big and tables are the more natural, well-known way to do this. and - don't forget about alignment problems
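    Translated into compilable C with the string ids of the original proposal, the fallback trick can be sketched like this (the "threads" query name and the wrapper are my own illustration, not part of the spec):

```c
#include <string.h>

typedef int CALLBACK(char *what, void *instance, void *ptr, int n);

/* the host's original callback, assumed to be saved somewhere */
static CALLBACK *prev_callback;

/* wrapper that answers one extra query itself and falls back
   to the original callback for everything else */
static int my_callback(char *what, void *instance, void *ptr, int n)
{
    if (strcmp(what, "threads") == 0)
        return 2;                                   /* override just this query */
    return prev_callback(what, instance, ptr, n);   /* fall through */
}
```

    Since every wrapper has the same signature, such overrides can be chained arbitrarily deep, which is exactly the flexibility a fixed function table lacks.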



    >It better has to be protected with crc (and version id etc) too.

    yes, we should add features to check for forward/backward compatibility and identify coders

    >Also I think that interface design requires some further analysis.
    >Like, what about detectors and filters, or having several interface
    classes in a single dll?

    i think that it's not prohibited by this design. when the dll is loaded, ClsMain should be called with INIT_OP - at this moment it can ask the caller about the environment and declare all its features via this general callback

    >Also there're threading issues (like, application allowing to use up to N
    threads) and whether dll is thread-safe or not (if not, it can be secured
    by loading multiple instances of dll - might be a useful feature as many
    experimental compressors are not really incapsulated).

    again it may be done at INIT_OP phase

    >Also memory allocation should better be done by application's callbacks.

    agree

    >Some interface methods are required for initialization and model flush
    (which are not the same as there might be some precalculation required
    only once).

    one ClsMain call performs one independent compression or decompression operation (one call = one solid block). if you compress many independent files using lzma or ppmd, it would be great to alloc all the memory at start and return it at the end, but adding these features will make the API even more complex. do you have a simple and light idea?





    so, next version of the API:
    Code:
  action:
    init
    done
    compress
    decompress

  callback:
    parameters  ptr, n   returns method params in a form like "d32m:fb128:mc256"
    read+N      ptr, n   reads the next input block into the buffer
    write+N     ptr, n   writes the next output block from the buffer
    malloc      n        allocates a memory area
    free        ptr      frees it

  errcode (numbers<0):
    general              unclassified errors
    not implemented      for operations/parameters that are not supported
    no memory            memory allocation failed
    it still doesn't provide ways to
    - identify methods (by CLSID or a url-including string); we can make an ID callback from the INIT call so a codec can identify itself
    - check versions and backward/forward compatibility
    - allow multiple codecs in the same dll; this may be solved by exporting ClsMain2, ClsMain3..., but that may not be enough for some more complex scenarios
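    As a sketch of how a host might serve this op set, the callback below handles the known operations and returns a NOT IMPLEMENTED code for everything else. The concrete error values and the stubbed-out stream handling are placeholders, not part of the proposal:

```c
#include <string.h>

enum {
    CLS_OK                    =  0,
    CLS_ERROR_GENERAL         = -1,
    CLS_ERROR_NOT_IMPLEMENTED = -2,
    CLS_ERROR_NO_MEMORY       = -3
};

typedef int CALLBACK(char *what, void *instance, void *ptr, int n);

/* host-side callback: known ops are handled, unknown ones degrade
   gracefully via NOT_IMPLEMENTED so hosts and codecs of different
   vintages can still cooperate */
static int host_callback(char *what, void *instance, void *ptr, int n)
{
    (void)instance;
    if (strcmp(what, "parameters") == 0) {
        strncpy((char *)ptr, "d32m:fb128:mc256", n);  /* example string from the proposal */
        return CLS_OK;
    }
    if (strncmp(what, "read", 4) == 0)    /* "read", "read0", "read1", ... */
        return 0;                         /* pretend EOF for the sketch */
    if (strncmp(what, "write", 5) == 0)   /* "write", "write0", ... */
        return n;                         /* pretend everything was written */
    return CLS_ERROR_NOT_IMPLEMENTED;
}
```

    The prefix match on "read"/"write" is what lets one handler cover the whole read+N/write+N range of multi-stream ops.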

  12. #11
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Codecs need to know how much memory they are supposed to use for compression / decompression.
    Also, some kind of properties might be useful, i.e. some codecs might handle multithreading in a smarter way than splitting streams.

  13. #12
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    735
    Thanked 660 Times in 354 Posts
    i've made a first source-code implementation - look at http://haskell.org/haskellwiki/FreeA...sion_libraries . the current version of the proposal will be kept there, so you can add your operations by editing the wiki (it's a kind of code repository )

    ps: also available at http://www.haskell.org/bz/cls.zip
    Last edited by Bulat Ziganshin; 30th September 2008 at 16:36.

  14. #13
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    735
    Thanked 660 Times in 354 Posts
    i follow the idea that simple things should be made simple and complex things should be possible. so i try to keep the set of ops that are mandatory to implement on both sides to a minimum. in the current proposal these are: compress and decompress for the codec, and read/write/malloc/free for the host. everything else is optional to implement, and the errcode NOT IMPLEMENTED should be returned for such ops

    so, if someone wants to quickly implement some compression algorithm, he doesn't need to describe memreqs, ids, versions and so on. it's easy to start hacking just by copying the 10-line minimal example and adding your compression code there. OTOH when you are going to publish your codec, it's better to add more info so it will be a "good citizen" inside complex systems. the same holds true for host apps - you may not need to control memory, multithreading and versioning if you use one and only one compressor

  15. #14
    Member
    Join Date
    Sep 2008
    Location
    Texas
    Posts
    14
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by m^2
    It's quite usual.
    Squeez, Total Commander, Directory Opus, (?)Izarc, possibly many more.
    Maybe, but most of the programs that offer that functionality are full explorer replacements, and Izarc, while a great program, is really limited in the custom compression of its supported formats: you cannot add engines, and its engine implementation results in slower compression and larger files (than 7zip, for example). The implementation in FreeArc is revolutionary in that you can use any compression engine or combination thereof, so any compressor with an API dll can be used. No other program I have seen offers the flexibility and customization of this program.

    HookEm!!

  16. #15
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    And even apps which are not specialized in compression can use this api.

  17. #16
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    i follow idea that simple things should be made simple and complex things should be possible. [...]
    I feel that memory requirements are a must... at least FA should give a huge red warning with flashing lights that decompression might be impossible on a machine with little memory.

  18. #17
    Programmer toffer's Avatar
    Join Date
    May 2008
    Location
    Erfurt, Germany
    Posts
    587
    Thanks
    0
    Thanked 0 Times in 0 Posts
    I think this is really a great step in the right direction for FA and will allow easy integration for algorithm developers. Great work, Bulat.

    One thing which came to my mind is - would it be/is it possible to cascade algorithms to allow custom filters and preprocessors with that interface? FreeArc already has that stuff built in - but could one create custom (de)compression pipelines with such an interface?

    Greets!

  19. #18
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    735
    Thanked 660 Times in 354 Posts
    this architecture is already used inside FA. i just run a separate thread for every algorithm involved, so writing from one algorithm provides the data for reading by the next one. you may even kidnap the C++ code for doing this from CompressionLibrary.cpp (multi_decompress). so of course i will support filters too; moreover, even filters with several outputs will eventually be supported (the API allows this, i just need to add a buffering scheme)
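    FA's actual pipeline runs each algorithm in its own thread with blocking handoff; the coupling itself - one codec's "write" becoming the next codec's "read" - can be sketched single-threaded with a shared buffer. Everything below is illustrative, not the real CompressionLibrary.cpp code:

```c
#include <string.h>

/* shared buffer coupling two pipeline stages; FreeArc itself does this
   with a thread per algorithm instead of a fixed in-memory buffer */
typedef struct { char data[4096]; int len; int pos; } Pipe;

/* callback given to the first stage: its "write" fills the pipe */
static int stage1_cb(char *what, void *instance, void *ptr, int n)
{
    Pipe *p = (Pipe *)instance;
    if (strcmp(what, "write") == 0) {
        memcpy(p->data + p->len, ptr, n);
        p->len += n;
        return n;
    }
    return -1;
}

/* callback given to the second stage: its "read" drains the pipe */
static int stage2_cb(char *what, void *instance, void *ptr, int n)
{
    Pipe *p = (Pipe *)instance;
    if (strcmp(what, "read") == 0) {
        int left = p->len - p->pos;
        int take = left < n ? left : n;
        memcpy(ptr, p->data + p->pos, take);
        p->pos += take;
        return take;
    }
    return -1;
}
```

    With threads, the same two callbacks would block instead of returning short, which is what makes arbitrarily long chains of codecs possible without buffering whole streams.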

    >memory requirements are a must..

    it's already supported in FA and will be supported in the new scheme too. above i just wrote about the *minimal* reqs for the codec. the idea is to make codec development as simple as possible - you just write compress and decompress functions that read and write data via the provided callback, and then you can use all the FA power with your new codec. when you are ready to distribute it, you add support for other things - getting/setting memreqs and so on

    and codecs shipped with fa will support all the advanced features, of course

    >And even the apps which is not specialized in compression can use this api.

    yes, now all apps use the gzip/bzip2 libs since they have a simple, well-known api. it would be great to provide a universal API for all future compression libs so apps can select the best codec purely on technical merits and easily switch to another one. and of course we can make this API very close to the zlib/bzip2 one (by using a separate thread)

  20. #19
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    A similar project, but involving archives, not compressors.

    libarchive provides simple support for BSD tar, gzip and other unix formats. It's a GNU tar for the BSDs.

    Libarchive is a programming library that can create and read several different streaming archive formats, including most popular tar variants, several cpio formats, and both BSD and GNU ar variants. It can also write shar archives and read ISO9660 CDROM images and ZIP archives. The bsdtar program is an implementation of tar(1) that is built on top of libarchive. It started as a test harness, but has grown into a feature-competitive replacement for GNU tar. The bsdcpio program is an implementation of cpio(1) that is built on top of libarchive.

  21. #20
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    735
    Thanked 660 Times in 354 Posts
    this looks more similar to 7z.dll, which provides access to various archive formats. what i propose is an *API* which allows developers of codecs and hosts to cooperate

  22. #21
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    735
    Thanked 660 Times in 354 Posts
    i've uploaded a new version of http://haskell.org/bz/arc1.arc that includes support for CLS (basic functionality allowing you to compress/decompress and get method parameters). usage example:

    arc a archive -m=test -t

    where the compression method "test" is implemented via cls-test.dll. The directory _CLS contains all the files required to easily build your own compression DLLs - see readme.txt. Please try!

    I'm also developing a more advanced version of the API which will support all the FreeArc features (memory limiting and so on) and provide a high-level C++ API. you may find this half-done in cls.h and complex-codec.cpp

    ps: btw, who can propose better name for the API instead of CLS?

  23. #22
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    735
    Thanked 660 Times in 354 Posts
    please write which external compressors you want to see implemented as dlls. this may serve as a hint to their developers. for me, the most interesting are ppmonstr, precomp, packjpg and everything written by Christian Martelock

  24. #23
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    please write which external compressors you want to see implemented as dlls. this may serve as a hint to their developers. for me, most interesting things are ppmonstr, precomp, packjpg and everything written by Christian Martelock
    Edit

    Compapi is already used, for windows.

    toffer's cmm is very good, and m1 + its optimizers are free.
    Last edited by lunaris; 12th October 2008 at 02:48.

  25. #24
    Member
    Join Date
    May 2008
    Location
    Earth
    Posts
    115
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by lunaris View Post
    Thanks Bulat!
    Finally the gui independence is coming!
    FreeArc's standard GUI executable, freearc.exe, DOES NOT USE THE CONSOLE PROGRAM arc.exe FOR ANY PURPOSE. It's just a variant with GUI capabilities built in.

  26. #25
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by IsName View Post
    FreeArc standard GUI executable, freearc.exe, DOES NOT USE CONSOLE PROGRAM arc.exe FOR ANY PURPOSE. It's just variant with GUI capabilities built in.
    I know, this was already explained, but FA uses a lot of external compressors.

  27. #26
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    please write which external compressors you want to see implemented as dlls. this may serve as a hint to their developers. for me, most interesting things are ppmonstr, precomp, packjpg and everything written by Christian Martelock
    LZSS.

  28. #27
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Hey Bulat,

    If possible, create delta, mm, tornado, dict and tta plugins for testing in other compressors.

    Skibinski's wrt plugin. Matt's preprocessor too.

  29. #28
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    735
    Thanked 660 Times in 354 Posts
    tor:1 = lzss, if anyone needs it

    yes, i plan to move all my algos to CLS and probably move them into separate dlls. it will go faster if someone says he plans to write a CLS host

  30. #29
    Member
    Join Date
    May 2008
    Location
    Antwerp , country:Belgium , W.Europe
    Posts
    487
    Thanks
    1
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    tor:1 = lzss if anyone need it

    yes, i plan to move all my algos to CLS and probably move them into separate dlls. it will be faster if someone will say that he plan to write CLS host
    What about "Nanozip.dll" ?

  31. #30
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    tor:1 = lzss if anyone need it

    yes, i plan to move all my algos to CLS and probably move them into separate dlls. it will be faster if someone will say that he plan to write CLS host
    I must have been tired when I looked at tor's parameters for the last (and only) time - I missed the compression level entirely.
    I just made some quick tests - it decompresses really fast, compresses fast too, but the size is bigger. I guess the decompression stub is gonna be bigger too.


