All Posts

Here's what I've tested so far.

1: WinSCP uploads file.json to the FTP server → Splunk local server retrieves the file to a local directory → Splunk reads and indexes the data.
sha256sum /splunk_local/file.json
45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd

2: Deleted file.json from the FTP server → used WinSCP to re-upload the same file.json → Splunk local server pulled the file to the local directory → Splunk did not index file.json.
sha256sum /splunk_local/file.json
45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd

3: WinSCP overwrote file.json on the FTP server with a version containing both new and existing entries → Splunk local server pulled the updated file to the local directory → Splunk re-read and re-indexed the entire file, including previously indexed data.
sha256sum /splunk_local/file.json
2217ee097b7d77ed4b2eabc695b89e5f30d4e8b85c8cbd261613ce65cda0b851

I noticed that the SHA value only changes when a new entry is added to the file, as in scenario 3. In scenarios 1 and 2 the SHA value remains the same, even when I delete and re-upload the exact same file to the FTP server and pull it into my local Splunk server. And yes, I'm pulling the file from the FTP server onto my local Splunk server, where the file is being monitored.
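If it helps to quantify the duplication at search time, something like the following should surface events that were indexed more than once (a minimal sketch; add your index and adjust the source path as needed):

source="/splunk_local/file.json"
| stats count by _raw
| where count > 1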
@ITWhisperer  Data flushing is enabled for the required tables.
Could it be that the file you are monitoring on the database server has not been closed/flushed, so the forwarder is unaware of any updates until later?
This is probably because your FTP server deletes the existing file when you overwrite it, so the forwarder sees it as a new file even if it has the same name and content. Try copying the received file on the FTP server into the monitored directory instead.
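For example, a post-transfer step along these lines keeps the monitored file in place instead of replacing it (the paths are hypothetical; cp onto an existing destination rewrites the contents in place and keeps the same inode, so the file monitor should see an updated file rather than a brand-new one):

# copy the received file over the monitored file instead of moving/renaming it
cp /ftp_landing/file.json /home/ws/logs/file.json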
Salting the CRC is very rarely the way to go. Usually it's about the length of initCrcLength: if your files contain a long "header" which is constant between files, you need to raise its value. But problems with CRC duplication manifest as the opposite of what you're getting, namely data _not_ being indexed at all because Splunk considers two files to be the same, not data being indexed multiple times.
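For reference, initCrcLength is set per monitor stanza in inputs.conf; a minimal sketch, with the path and the 1024-byte CRC window chosen purely as examples rather than taken from this thread:

[monitor:///home/ws/logs/file.json]
sourcetype = _json
initCrcLength = 1024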
Hi, I'm facing an issue where the same data gets indexed multiple times every time the JSON file is pulled from the FTP server. Each time the JSON file is retrieved and placed on my local Splunk server, it overwrites the existing file. I don't have control over the content being placed on the FTP server; it could be either an entirely new entry or an existing entry with new data added, as shown below. I'm monitoring a specific file, as its name, type, and path remain consistent. From what I can observe, every time the file has new entries alongside previously indexed data, it is re-indexed, causing duplication.

Example:

file.json
2024-04-21 14:00 - row 1
2024-04-21 14:10 - row 2

overwritten file.json
2024-04-21 14:00 - row 1
2024-04-21 14:10 - row 2
2024-04-21 14:20 - row 3

Additionally, I checked the sha256sum of the JSON file after it's pulled into my local Splunk server. The hash value changes before and after the file is overwritten.

file.json:
2217ee097b7d77ed4b2eabc695b89e5f30d4e8b85c8cbd261613ce65cda0b851  /home/ws/logs/###.json

overwritten file.json:
45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd  /home/ws/logs/###.json

I've tried using initCrcLength, crcSalt, and followTail, but they don't seem to prevent the duplication; Splunk still indexes it as new data. Any assistance would be appreciated, as I can't seem to prevent the duplication in indexing.
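One workaround to consider is to keep the file Splunk monitors strictly append-only: rather than letting the pulled file overwrite the monitored one, append only the lines that are not already present. A rough sketch, assuming one JSON object per line; the paths and file names are hypothetical:

# append lines from the newly pulled file that are not already in the monitored file
NEW=/home/ws/logs/pulled/file.json
MONITORED=/home/ws/logs/file_append.json
touch "$MONITORED"
grep -F -x -v -f "$MONITORED" "$NEW" >> "$MONITORED"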
Hello Splunkers!!

Issue Description
We are experiencing a significant delay in data ingestion (>10 hours) for one index in Project B within our Splunk environment. Interestingly, Project A, which operates with a nearly identical configuration, does not exhibit this issue, and data ingestion occurs as expected.

Steps Taken to Diagnose the Issue
To identify the root cause of the delayed ingestion in Project B, the following checks were performed:
Timezone Consistency: Verified that the timezone settings on the database server (the source of the data) and the Splunk server are identical, ruling out timestamp misalignment.
Props Configuration: Confirmed that the props.conf settings align with the event patterns, ensuring proper event parsing and processing.
System Performance: Monitored CPU performance on the Splunk server and found no resource bottlenecks or excessive load.
Configuration Comparison: Conducted a thorough comparison of configurations between Project A and Project B, including inputs, outputs, and indexing settings, and found no apparent differences.

Observations
The issue is isolated to Project B, despite both projects sharing similar configurations and infrastructure. Project A processes data without delays, indicating that the Splunk environment and database connectivity are generally functional.

Screenshot 1:
Screenshot 2:

Event sample:
TIMESTAMP="2025-04-17T21:17:05.868000Z",SOURCE="TransportControllerManager_x.onStatusChangedTransferRequest",IDEVENT="1312670",EVENTTYPEKEY="TRFREQ_CANCELLED",INSTANCEID="210002100",OBJECTTYPE="TRANSFERREQUEST",OPERATOR="1",OPERATORID="1",TASKID="10030391534",TSULABEL="309360376000158328"

props.conf
[wmc_events]
CHARSET = AUTO
KV_MODE = AUTO
SHOULD_LINEMERGE = false
description = WMC events received from the Oracle database, formatted as key-value pairs
pulldown_type = true
TIME_PREFIX = ^TIMESTAMP=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
TZ = UTC
NO_BINARY_CHECK = true
TRUNCATE = 10000000
#MAX_EVENTS = 100000
ANNOTATE_PUNCT = false
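One quick way to confirm whether the delay is happening at ingestion rather than being a timestamp-parsing artefact is to compare _indextime with _time for the affected data (a minimal sketch; narrow the index and sourcetype to Project B's actual values):

index=* sourcetype=wmc_events earliest=-24h
| eval lag_seconds = _indextime - _time
| stats min(lag_seconds) avg(lag_seconds) max(lag_seconds) by index host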
Hi @livehybrid,

Previously, I was monitoring a folder path. However, after making some adjustments, I've switched to monitoring a specific file instead, since the name, type, and path will always remain consistent. Now I'm encountering an issue where the same data gets indexed multiple times whenever the JSON file is pulled from the FTP server. Each time the JSON file is retrieved and placed on my local Splunk server, it overwrites the existing file. I've tried using initCrcLength and crcSalt, but they don't seem to prevent the duplication; Splunk still indexes it as new data.

Additionally, I checked the sha256sum of the JSON file after it's pulled into my local Splunk server. The hash value changes before and after the new data overwrites the file. I'm not entirely sure how Splunk determines the file's initial 256-byte hash for comparison.

1: 2217ee097b7d77ed4b2eabc695b89e5f30d4e8b85c8cbd261613ce65cda0b851  /home/ws/logs/cpf_case_final.json
2: 45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd  /home/ws/logs/cpf_case_final.json
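If you want to see what the tailing processor currently records for that file (the CRC it computed and how far it has read), a REST search along these lines is commonly used; treat it as a sketch, since the exact endpoint availability and field names can vary by Splunk version:

| rest /services/admin/inputstatus/TailingProcessor:FileStatus splunk_server=local
| transpose
| search column="*cpf_case_final.json*"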
Thank you @livehybrid,

I edited splunk_metadata.csv, removed the existing entries, added the lines below in key,index,value format, and restarted SC4S:

fortinet_fortios_traffic,index,index_new
fortinet_fortios_utm,index,index_new

That did not work either. Am I still missing something here?

Also, is there a way to change all of the default indexes (netfw, netops, oswin, osnix, and so on) to a single new index?
@livehybrid Thanks for the response. The regex is working fine, and four fields are extracted (log_level, request_id, component, message). Are these four fields the only ones for Typo3 logs, and should this work for every Typo3 log format? I did not find any official documentation on the Typo3 log format. The message field contains some nested field-value pairs as well. In addition, message values can span multiple lines, so I had to adjust props.conf like this:

SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\w{3},\s+\d+\s+\w+\s+\d{4}\s+\d{2}:\d{2}:\d{2}\s+\+\d{4}

Thanks
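As an aside, an equivalent way to handle the multi-line messages without line merging (usually cheaper at index time) is to break on the timestamp with LINE_BREAKER; this is an untested sketch that reuses the same timestamp pattern, not something verified against your data:

[typo3]
SHOULD_LINEMERGE = false
# break before each new timestamp; the first capture group is consumed as the event boundary
LINE_BREAKER = ([\r\n]+)(?=\w{3},\s+\d+\s+\w+\s+\d{4}\s+\d{2}:\d{2}:\d{2}\s+\+\d{4})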
Hi

To re-route logs to a different index in SC4S, you must correctly map the source type to your target index in the splunk_metadata.csv file. The format is: key,index,value

Regarding the key names, you can see these at https://splunk.github.io/splunk-connect-for-syslog/1.91.5/sources/Fortinet/ which are:

key                        default index
fortinet_fortios_traffic   netfw
fortinet_fortios_utm       netfw
fortinet_fortios_event     netops
fortinet_fortios_log       netops

See below for more detail on the splunk_metadata.csv format:

The columns in this file are key, metadata, and value. To make a change using the override file, consult the example file (or the source documentation) for the proper key, then modify or add rows in the table, specifying one or more of the following metadata/value pairs for a given key:

key refers to the vendor and product name of the data source, using the vendor_product convention. For overrides, these keys are listed in the example file. For new custom sources, be sure to choose a key that accurately reflects the vendor and product being configured and that matches the log path.
index specifies an alternate value for the index.

Check the docs for more info on the format. After editing splunk_metadata.csv, you must restart the SC4S container or service for changes to take effect.
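Putting that together for your case, the override rows in splunk_metadata.csv would look something like the following (index_new is simply the example index name used in this thread; include only the keys you actually want to re-route):

fortinet_fortios_traffic,index,index_new
fortinet_fortios_utm,index,index_new
fortinet_fortios_event,index,index_new
fortinet_fortios_log,index,index_new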
Hi Folks,

New to Splunk and SC4S deployment. So far I have been able to make good progress. I have set up two SC4S servers, one on Linux and the other on Windows with WSL. The challenge I am facing is that all the syslogs are going to the default indices. For example, I see that the FW logs are going to netfw. I am trying to move them to a new index that I have created, index_new. I have tried editing the splunk_metadata.csv file, but I still see the logs going to netfw. I have tried different configurations, but nothing worked:

fortinet_fortigate,index,index_new
or
ftnt_fortigate,index,index_new
or
netfw,index,index_new

In the HEC configuration, I have not selected any index and left it blank. The default index is set to index_new.

Thank you in advance.

PS: I have also tried Maciek Stopa's posfilter.conf script as well.
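In case it helps, on a typical container-based SC4S install the override is read from the local context directory on the host and only picked up after a service restart; the path and service name below are the common defaults and may differ in your deployment:

# host-side override file, format: key,index,value
/opt/sc4s/local/context/splunk_metadata.csv

# restart SC4S so the new mapping is loaded
sudo systemctl restart sc4s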
Hi @TroyWorkman

There isn't currently a SplunkBase app for Webex Calling with CDR reporting, however there is an API you can utilise to get this info: https://developer.webex.com/docs/api/v1/reports-detailed-call-history/get-detailed-call-history

So it should be possible for an app developer to put a simple app together for this, or you might be able to use the API calls via the SplunkBase app "Webtools Add-on" (which can make web requests) to get started and see if the logs are what you need.
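If you just want to eyeball the CDR data before building anything, a plain API call with a bearer token is enough to test; this is only a sketch, and the API host, endpoint path, and query parameters should all be copied from the linked documentation page rather than from here:

# WEBEX_TOKEN and the URL components are placeholders - take the real values from the docs above
curl -s -H "Authorization: Bearer $WEBEX_TOKEN" \
  "https://<api-host-from-docs>/v1/<detailed-call-history-endpoint>?startTime=2025-04-01T00:00:00.000Z&endTime=2025-04-02T00:00:00.000Z"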
Hi @MrGlass, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @ganesanvc

Does "text_search" come from a search result, or is it something like a token you are passing in? I couldn't tell from the request, but if it's coming from a token and you want to apply the additional escaping then you can do this:

index=main source="answersDemo"
    [| makeresults
     | eval text_search="*\\Test\abc\test\abc\xxx\OUT\*"
     | eval FileSource=replace(text_search, "\\\\", "\\\\\\\\")
     | return FileSource ]

Note: I used a sample event in index=main, as you can see in the results above, created with:

| windbag
| head 1
| eval _raw="Test Event for SplunkAnswers user=Demo FileSource=\"MyFileSystem\\Test\\abc\\test\\abc\\xxx\\OUT\\test.exe\" fileType=exe"
| eval source="answersDemo"
| collect index=main output_format=hec

I may have got the wrong end of the stick with what you're looking for here, but let me know!
Hi

There is no official Splunk TA for Typo3, so you need to create a custom sourcetype with appropriate field extractions for your Typo3 logs. Start by identifying the log format (e.g., JSON, key-value, plain text) and create custom props.conf and transforms.conf settings to parse the fields.

It's a few years since I've used Typo3, and the only instance I still have running just has apache2 logs, however in the Typo3 docs I found the following sample event - is this similar to yours?

Fri, 19 Jul 2023 09:45:00 +0100 [WARNING] request="5139a50bee3a1" component="TYPO3.Examples.Controller.DefaultController": Something went awry, check your configuration!

If so then the following props/transforms should help get you started:

== props.conf ==
[typo3]
SHOULD_LINEMERGE = false
# Custom timestamp extraction (day, month, year, time, tz)
TIME_PREFIX = ^
TIME_FORMAT = %a, %d %b %Y %H:%M:%S %z
TRUNCATE = 10000
# Route event to stanza in transforms.conf for field extractions
REPORT-typo3_fields = typo3_field_extractions

== transforms.conf ==
[typo3_field_extractions]
# Extract log_level, request id, component, message
REGEX = \[([^\]]+)\]\s+request="([^"]+)"\s+component="([^"]+)":\s*(.*)$
FORMAT = log_level::$1 request_id::$2 component::$3 message::$4
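To tie this back to the inputs.conf side, the sourcetype then just needs to be assigned wherever the files are read; the monitored path below is purely illustrative and should be replaced with your actual Typo3 log location:

[monitor:///var/log/typo3/*.log]
sourcetype = typo3
disabled = false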
Hi,

I need recommendations on a sourcetype for Typo3 logs. By default, I set the sourcetype to "typo3" in inputs.conf, but the logs are not parsed properly. I did not find any Splunk TA for Typo3 that can help with parsing. Does anyone have experience onboarding Typo3 logs?

Thank you!
Hi, I'm wondering, in Enterprise can I add various dashboard links, either onto the main splash screen or onto the page where the add-ons are listed when users first log in? Thank you
@yuanliu You meant RHS, not LHS.

@ganesanvc I hope you're running this snippet on an already relatively filtered event stream. If you want to use it as an initial search because you're getting the text_search parameter from elsewhere (like a token in a dashboard), you might be far better off using a subsearch to create a verbatim search term.
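A minimal sketch of that subsearch approach, assuming the value arrives as a dashboard token (the token name, index, and field name here are illustrative, not from the original post):

index=main sourcetype=mysourcetype
    [| makeresults
     | eval FileSource="$text_search_tok$"
     | return FileSource ]

The return command hands the outer search a quoted FileSource="<value>" term, so the value is matched as a literal field=value pair; depending on which characters the token can contain, the replace()-based escaping shown earlier in the thread may still be needed.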