All Posts

@livehybrid Thanks for the response. The regex is working fine, and four fields are extracted (log_level, request_id, component, message). Are these four fields the only ones for Typo3 logs, and should this work for every Typo3 log format? I did not find any official documentation on the Typo3 log format. The message field contains some nested field-value pairs as well. In addition, message values can span multiple lines, so I had to adjust props.conf like this:

SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\w{3},\s+\d+\s+\w+\s+\d{4}\s+\d{2}:\d{2}:\d{2}\s+\+\d{4}

Thanks
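If line merging ever becomes a performance concern, an alternative worth testing is to keep SHOULD_LINEMERGE = false and break events with LINE_BREAKER instead. This is only a sketch built from the same timestamp pattern quoted above, not tested against your data:

[typo3]
SHOULD_LINEMERGE = false
# Break only before lines that start with the Typo3 timestamp (e.g. "Fri, 19 Jul 2023 ...");
# the captured newlines are consumed as the event boundary, the lookahead keeps the timestamp.
LINE_BREAKER = ([\r\n]+)(?=\w{3},\s+\d+\s+\w+\s+\d{4}\s+\d{2}:\d{2}:\d{2}\s+\+\d{4})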
Hi

To re-route logs to a different index in SC4S, you must correctly map the source type to your target index in the splunk_metadata.csv file. The format is:

key,index,value

Regarding the key names, you can see these at https://splunk.github.io/splunk-connect-for-syslog/1.91.5/sources/Fortinet/ which are:

key                        default index
fortinet_fortios_traffic   netfw
fortinet_fortios_utm       netfw
fortinet_fortios_event     netops
fortinet_fortios_log       netops

See below for more detail on the splunk_metadata.csv format:

The columns in this file are key, metadata, and value. To make a change using the override file, consult the example file (or the source documentation) for the proper key, then modify or add rows in the table, specifying one or more of the following metadata/value pairs for a given key:

- key, which refers to the vendor and product name of the data source, using the vendor_product convention. For overrides, these keys are listed in the example file. For new custom sources, be sure to choose a key that accurately reflects the vendor and product being configured and that matches the log path.
- index, to specify an alternate value for the index.

Check the docs for more info on the format. After editing splunk_metadata.csv, you must restart the SC4S container or service for the changes to take effect.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
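For example, to send the FortiGate logs to the index mentioned in the question (index_new), the override rows in splunk_metadata.csv would look like this, using the keys from the table above:

fortinet_fortios_traffic,index,index_new
fortinet_fortios_utm,index,index_new
fortinet_fortios_event,index,index_new
fortinet_fortios_log,index,index_new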
Hi Folks,

New to Splunk and SC4S deployment. So far I have been able to make good progress. I have set up 2 SC4S servers, one on Linux and the other on Windows with WSL. The challenge that I am facing is that all the syslogs are going to the default indices. For example, I see that the FW logs are going to netfw. I am trying to move them to a new index that I have created, index_new. I have tried editing the splunk_metadata.csv file but I still see the logs going to netfw. I have tried different configurations but nothing worked:

fortinet_fortigate,index, index_new
or
ftnt_fortigate, index,index_new
or
netfw,index,index_new

In the HEC configuration, I have not selected any index and left it blank. The default index is set to index_new.

Thank you in advance.

PS: I have also tried Maciek Stopa's posfilter.conf script as well.
Hi @TroyWorkman

There isn't currently a SplunkBase app for Webex Calling with CDR reporting; however, there is an API you can utilise to get this info: https://developer.webex.com/docs/api/v1/reports-detailed-call-history/get-detailed-call-history

It should therefore be possible for an app developer to put a simple app together for this, or you might be able to use the API calls via the SplunkBase app "Webtools Add-on" (which can make web requests) to get started and see whether the logs are what you need.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
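As a rough starting point, a search along these lines shows the kind of request the Webtools Add-on enables. Treat it as a hypothetical sketch: the curl command's argument names, the endpoint path, and the startTime/endTime parameters are assumptions to verify against the add-on's documentation and the linked API reference, and the API also requires an authorization token with the appropriate reporting scope:

| curl method=get url="https://analytics.webexapis.com/v1/cdr_feed?startTime=2024-01-01T00:00:00.000Z&endTime=2024-01-01T12:00:00.000Z"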
Hi @MrGlass, good for you, see you next time! Ciao and happy splunking. Giuseppe

P.S.: Karma Points are appreciated by all the contributors
Hi @ganesanvc

Does "text_search" come from a search result, or is it something like a token you are passing in? I couldn't tell from the request, but if it's coming from a token and you want to apply the additional escaping then you can do this:

index=main source="answersDemo"
    [| makeresults
     | eval text_search="*\\Test\abc\test\abc\xxx\OUT\*"
     | eval FileSource=replace(text_search, "\\\\", "\\\\\\\\")
     | return FileSource ]

Note: I used a sample event in index=main, as you can see in the results above, created using:

| windbag
| head 1
| eval _raw="Test Event for SplunkAnswers user=Demo FileSource=\"MyFileSystem\\Test\\abc\\test\\abc\\xxx\\OUT\\test.exe\" fileType=exe"
| eval source="answersDemo"
| collect index=main output_format=hec

I may have got the wrong end of the stick with what you're looking for here, but let me know!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi

There is no official Splunk TA for Typo3, so you need to create a custom sourcetype with appropriate field extractions for your Typo3 logs. Start by identifying the log format (e.g., JSON, key-value, plain text) and create custom props.conf and transforms.conf settings to parse the fields.

It's a few years since I've used Typo3, and the only instance I still have running just has apache2 logs; however, in the Typo3 docs I found the following sample event - is it similar to yours?

Fri, 19 Jul 2023 09:45:00 +0100 [WARNING] request="5139a50bee3a1" component="TYPO3.Examples.Controller.DefaultController": Something went awry, check your configuration!

If so, then the following props/transforms should help get you started:

== props.conf ==
[typo3]
SHOULD_LINEMERGE = false
# Custom timestamp extraction (day, month, year, time, tz)
TIME_PREFIX = ^
TIME_FORMAT = %a, %d %b %Y %H:%M:%S %z
TRUNCATE = 10000
# Route event to stanza in transforms.conf for field extractions
REPORT-typo3_fields = typo3_field_extractions

== transforms.conf ==
[typo3_field_extractions]
# Extract log_level, request_id, component, message
REGEX = \[([^\]]+)\]\s+request="([^"]+)"\s+component="([^"]+)":\s*(.*)$
FORMAT = log_level::$1 request_id::$2 component::$3 message::$4

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
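To sanity-check the extraction logic before deploying the .conf changes, you can replay the sample event through rex at search time; this uses the same regex as the transforms stanza above, just with inline field names added:

| makeresults
| eval _raw="Fri, 19 Jul 2023 09:45:00 +0100 [WARNING] request=\"5139a50bee3a1\" component=\"TYPO3.Examples.Controller.DefaultController\": Something went awry, check your configuration!"
| rex "\[(?<log_level>[^\]]+)\]\s+request=\"(?<request_id>[^\"]+)\"\s+component=\"(?<component>[^\"]+)\":\s*(?<message>.*)$"
| table log_level request_id component message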
Hi, I need recommendations on a source type for Typo3 logs. By default, I set the source type as "typo3" in inputs.conf but the logs are not parsed properly. I did not find any Splunk TA for Typo3 that can help in parsing. Does anyone have experience onboarding Typo3 logs? Thank you!
Hi, I'm wondering whether, in Splunk Enterprise, I can add links to various dashboards, either on the main splash screen or where the add-ons are listed when users first log in? Thank you
@yuanliu You meant RHS, not LHS.

@ganesanvc I hope you're running this snippet on an already relatively filtered event stream. If you want to use it as an initial search because you're getting the text_search parameter from elsewhere (like a token in a dashboard), you might be way better off using a subsearch to create a verbatim search term.
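A sketch of that subsearch approach (the index, sourcetype, and path are placeholders): a subsearch that returns a field literally named "search" has its value inserted verbatim into the outer search, so you can build the wildcard term once and let Splunk apply it as an indexed search filter:

index=main sourcetype=mysourcetype
    [| makeresults
     | eval search="FileSource=\"*\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\*\""
     | fields search ]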
However, when I try to use the text_search_escaped variable inside search, I get no results.

Splunk's search command can only use a field name like text_search_escaped on the left-hand side. If you want to use a field's value, where is your friend. For example, you can say:

| eval text_search="%\\Test\abc\test\abc\xxx\OUT\%"
| eval text_search_escaped=replace(text_search, "\\\\", "\\\\\\\\")
| where FileSource LIKE text_search_escaped
Thanks for noticing it. It's my bad; it's working as expected.
Please help in identifying where I am going wrong

How about a spelling error?

| makeresults
| eval a="27 Mar 2025,02:14:11"
| eval b="27 Mar 2025,03:14:12"
| eval stime=strptime(a,"%d %b %Y,%H:%M:%S")
| eval etime=strptime(b,"%d %b %Y,%H:%M:%S")
| eval diff = etime - stime
| table a b stime etime diff

a                     b                     stime              etime              diff
27 Mar 2025,02:14:11  27 Mar 2025,03:14:12  1743066851.000000  1743070452.000000  3601.000000
I need to calculate the time difference between start and end times, but I get the difference value as null. Not sure what I am missing. Below is the sample query:

| makeresults
| eval a="27 Mar 2025,02:14:11"
| eval b="27 Mar 2025,03:14:12"
| eval stime=strptime(a,"%d %b %Y,%H:%M:%S")
| eval etime=strptime(b,"%d %b %Y,%H:%M:%S")
| eval diff = eTime - sTime
| table a b stime etime diff

I get the below result with the diff value empty:

a                     b                     stime              etime              diff
27 Mar 2025,02:14:11  27 Mar 2025,03:14:12  1743041651.000000  1743045252.000000

Please help in identifying where I am going wrong
Looks like I was not completely right. It's more complicated, but there is a way. From the transforms.conf.spec file:

NOTE: For KV Store lookups, a setting of 'case_sensitive_match=false' is honored only when the data in the KV Store lookup table is entirely in lower case. The input data can be in any case.

Context:

case_sensitive_match = <boolean>
* If set to true, Splunk software performs case sensitive matching for all fields in a lookup table.
* If set to false, Splunk software performs case insensitive matching for all fields in a lookup table.
* NOTE: For KV Store lookups, a setting of 'case_sensitive_match=false' is honored only when the data in the KV Store lookup table is entirely in lower case. The input data can be in any case.
* For case sensitive field matching in reverse lookups see reverse_lookup_honor_case_sensitive_match.
* Default: true
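So a KV Store lookup definition along these lines should match case-insensitively, provided the values stored in the collection are entirely lower case (the stanza, collection, and field names below are hypothetical):

[user_roles_kv]
# KV Store-backed lookup; collection must already exist in collections.conf
external_type = kvstore
collection = user_roles
fields_list = user, role
# Honored only because the stored data is all lower case (see spec note above)
case_sensitive_match = false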
KV Store supports only case-sensitive content: https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/DefineaKVStorelookupinSplunkWeb

Tomas
We saw this problem with a customer deployment as well. It turned out that a different admin, not the main admin who was usually on the box, had set splunkd some time ago to run only as a certain domain user rather than as SYSTEM. The MSI upgrade restarts Splunk at the end, but I guess it ends up restarting as the user who ran the MSI, so it fails.

Another clue was that restarting splunkd on the command line, as the administrator user, failed with "splunk stopped" as the only output.

Tacking LAUNCHSPLUNK=0 onto the MSI invocation was ultimately the answer, and then the admins set Splunk back to run as SYSTEM so it wouldn't cause any unexpected problems going forward.
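For anyone hitting the same thing, that flag is passed as an MSI property on the install command line; a minimal sketch (the installer file name is a placeholder):

msiexec.exe /i splunk-enterprise-x64.msi LAUNCHSPLUNK=0 /quiet

With LAUNCHSPLUNK=0 the installer does not start Splunk at the end, so the service can afterwards be started manually under the intended account.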
Two small points:

1. I would avoid the usage of the /lib directory in app code. It was intended to work around an issue that no longer exists (outside of persistent custom REST endpoints) and causes additional issues for extension points that distribute resources to search peers (i.e. certain types of custom search commands and external lookups) - you will need to update .conf files to make sure that the /lib directory is distributed correctly. There are no advantages to using /lib over /bin, and /bin is automatically distributed to search peers as required.

2. Similarly, the guidance to do import manipulation using sys.path.insert is also outdated, as it does not prevent import collisions within the context of persistent custom REST endpoints.

Edit: In the context of a scripted input it shouldn't matter either way. I just want to make sure it's understood where the /lib guidance came from and why it is out of date today. I'm working to get the old guidance removed from dev.splunk.com and the examples on GitHub - appreciate your patience in the meantime.
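A small sketch of point 1 for a scripted input (the app and package names are hypothetical): a package dropped next to the script under /bin needs no path manipulation, because Python places the executed script's own directory at the front of sys.path.

# myapp/bin/my_input.py - scripted input
# 'somepackage' lives alongside this script at myapp/bin/somepackage/,
# so it imports without any sys.path.insert calls.
import somepackage

print(somepackage.__name__)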
Hi @livehybrid

The windbag command worked just fine, but the collect command did not. How do I use the collect command in a Splunk report that appends | summaryindex automatically? Perhaps the screenshot below will explain better. Thank you for your help.

I have a Splunk report that generates a summary index daily. The search query is:

index=summary report=json_test

When the report runs daily, the search is appended with the "| summaryindex" command below:

| windbag
| head 1
| eval _raw="{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}"
| summaryindex spool=t uselb=t addtime=t index="summary" file="RMD[random characters].stash_new" name="json_test" marker="hostname=\"https://aa.test.com/\",report=\"json_test\""
Hi @LearningGuy

Yes, you can use output_format=hec - see below:

| windbag
| head 1
| eval _raw="{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}"
| eval source="answersDemo"
| collect index=main output_format=hec

Then search index=main source=answersDemo to see the collected event.

Note - you need to ensure your role has the run_collect capability and also access to the index you are collecting into.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing