All Posts



We have data similar to the below and are looking to create a stacked timechart; however, setting the stackMode does not seem to have any impact on the chart.

timestamp  System  Value
TIME1      SYS1    VALUE1.1
TIME1      SYS2    VALUE2.1
TIME1      SYS3    VALUE3.1
TIME1      SYS4    VALUE4.1
TIME2      SYS1    VALUE1.2
TIME2      SYS2    VALUE2.2
TIME2      SYS3    VALUE3.2
TIME2      SYS4    VALUE4.2

timechart latest(Value) by System

<option name="charting.chart.stackMode">stacked</option>
Hi, can anyone help me with a solution, please? I have wineventlog data as below. By default, the whitespace is being considered while parsing the field name. For example, it should extract the field name as "Provider Name", but instead it extracts the field name as "Name": the whitespace splits the name and only the last part is kept. I have many similar fields, as highlighted below. Please guide me on where I have to make a change to get the correct field names.

Sample Log:
<Event xmlns='http://XXX.YYYY.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{12345-1111-2222-a5ba-XXX}'/><EventID>2222</EventID><Version>0</Version><Level>0</Level><Task>12345</Task><Opcode>0</Opcode><Keywords>1110000000000000</Keywords><TimeCreated SystemTime='2024-07-24T11:36:15.892441300Z'/><EventRecordID>0123456789</EventRecordID><Correlation ActivityID='{11aa2222-abc2-0001-0002-XXXX1122}'/><Execution ProcessID='111' ThreadID='111'/><Channel>Security</Channel><Computer>YYY.xxx.com</Computer><Security/></System><EventData><Data Name='MemberName'>-</Data><Data Name='MemberSid'>CORP\gpininfra-svcaccounts</Data><Data Name='TargetUserName'>Administrators</Data><Data Name='TargetDomainName'>Builtin</Data><Data Name='TargetSid'>BUILTIN\Administrators</Data><Data Name='SubjectUserSid'>NT AUTHORITY\SYSTEM</Data><Data Name='SubjectUserName'>xyz$</Data><Data Name='SubjectDomainName'>CORP</Data><Data Name='SubjectLogonId'>1A2B</Data><Data Name='PrivilegeList'>-</Data></EventData></Event>
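For what it's worth, the behaviour follows from the XML itself: `Provider Name='…'` is an element named `Provider` carrying an attribute named `Name`, so any extraction keyed on attribute names alone will report `Name`. A minimal Python sketch (outside Splunk, purely to illustrate the structure; the XML is a trimmed-down version of the sample above):

```python
import xml.etree.ElementTree as ET

# Trimmed-down version of the sample event above
xml = ("<Event><System>"
       "<Provider Name='Microsoft-Windows-Security-Auditing' Guid='{12345}'/>"
       "</System></Event>")

provider = ET.fromstring(xml).find("./System/Provider")
print(provider.tag)     # Provider  (the element name)
print(provider.attrib)  # {'Name': 'Microsoft-...-Auditing', 'Guid': '{12345}'}
```

So a field called "Provider Name" has to be built by combining the element name with the attribute name (for instance via a custom extraction), rather than relying on an attribute-name-only parse.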
Hi Rajesh, It was the http configuration on my controller; as soon as I changed it to https and re-deployed my cluster-agent, it started reporting to my controller. Thanks for the help and patience. Have a great day! Regards, Gustavo Marconi
OK so this size doesn't look like it should give you a problem, so it is possibly down to your actual data. Does it fail for all values of id? Are there other fields that you could try adding instead of count_err which might work? Can you break down the problem further to try and isolate the issue?
Are you sure your lookahead (MAX_TIMESTAMP_LOOKAHEAD) is big enough? I haven't counted exactly, but your event seems close to exceeding that 650-character mark before reaching the timestamp. Also, have you verified your TIME_PREFIX? That capture group looks strange, and you have a very strange lookbehind which seems not to do what you think it should do. Verify it on regex101.com.
This looks awfully close to part of a JSON structure inserted as a string field in another JSON structure. It is bad on at least two levels:
1) Embedding JSON as an escaped string prevents it from being properly parsed by Splunk
2) Extracting from structured data with regexes is asking for trouble
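To illustrate the first point, here is a small Python sketch of what double-encoded JSON looks like (the field names are made up): the inner document is just an opaque string until you parse it a second time, which is exactly what an automatic JSON extraction won't do for you.

```python
import json

# Outer document; the "payload" value is itself JSON, escaped into a string
outer = '{"payload": "{\\"SourceIp\\": \\"10.10.6.0\\"}"}'

doc = json.loads(outer)
print(type(doc["payload"]))         # <class 'str'> - not parsed as an object
inner = json.loads(doc["payload"])  # a second parse is needed
print(inner["SourceIp"])            # 10.10.6.0
```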
1. The CM does not manage a SHC; the CM manages the indexer cluster. The deployer (not the deployment server!) is used to push configuration to a SHC.
2. As @Tom_Lundie said, you don't add inputs using the GUI on a SHC. In fact, you shouldn't use a SHC to run inputs at all. Even in a smaller environment you shouldn't run inputs on a standalone SH - that's what HFs are for.
Your recovery event doesn't seem to match the rex pattern you are applying to it. Are there other recovery events which do match? Do you want to ignore the recovery events which don't match the rex pattern?

P.S. You can leave the transaction command in if you like, but I don't see what value it is giving you, because all the information appears to be in the single event (and therefore the transaction command is just wasting time and resources?).
What have you tried so far? What error do you get? Are you trying to extract the field at index-time or search-time? Have you tried this rex command in your search?

| rex "SourceIp\\\\\\":\\\\\\"(?<SourceIp>[\d\.]+)"
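If it helps to sanity-check the escaping outside SPL's extra quoting layer, the same match in plain Python looks like this (the raw event text literally contains backslash-quote sequences, so the regex must match a literal backslash followed by a quote):

```python
import re

# Fragment of the raw event text, with literal backslash-quote sequences
raw = r'",\"SourceIp\":\"10.10.6.0\",\"N'

# \\ matches one literal backslash, " matches the quote that follows it
m = re.search(r'SourceIp\\":\\"(?P<SourceIp>[\d.]+)', raw)
print(m.group("SourceIp"))  # 10.10.6.0
```

Roughly speaking, each layer of quoting doubles the backslashes, which is how the SPL rex above ends up with runs of six.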
So you didn't "find something else that helped".  You used my answer.
I don't understand the reply.  Did my answer work or not?  If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Hi @Tom_Lundie, I am checking whether there is any way we can download the sandboxing result as a PDF. Regards, Harisha
Hi, If you are facing a specific error then please post it here. Otherwise if you just need general guidance then I would start with the documentation: Create a new playbook in Splunk SOAR (Cloud) - Splunk Documentation
Hi, This is by design. The problem with running modular inputs on the SHC layer is that if all of the nodes in the cluster attempted to run the input, you would get duplicated data and all sorts of problems. Splunk seems to be actively developing a solution for this, but does not officially support it at the time of writing. That being said, a handful of apps do have official support (e.g. Splunk DB Connect). These seem to rely on the run_only_one directive in inputs.conf to ensure they only run on the captain node, preventing duplication. Unless your TA has official support for deployment on a SHC, I would recommend using a separate, dedicated instance for input collection, such as a Heavy Forwarder.
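For reference, a directive like the one mentioned above would live in the input's own stanza. A purely hypothetical sketch (the stanza name and interval are made up, and the setting is only honored by apps built to support it - check your TA's documentation before relying on it):

```
# inputs.conf (hypothetical sketch - app must support SHC-aware inputs)
[script://./bin/my_input.py]
interval = 300
# Run this input on only one SHC member to avoid duplicated data
run_only_one = true
```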
Hi, 5 columns and 79 rows.
Below is the search query for the Icinga Problem events.   Below is the search query for the Icinga Recovery events.     If you want me to get rid of the transaction command, that's fine. I would like to group multiple events into a single meta-event that represents a single physical event.
Hello, I want to extract an IP field from a log but I get an error. This is a part of my log:

",\"SourceIp\":\"10.10.6.0\",\"N

I want 10.10.6.0 as a field. Can you help me?
Hello, I am building an app using the Splunk Add-on Builder. Can I use the helper.new_event method in order to send a metric to the metrics index? If yes, what should be the format of the "event"? Kind regards,
@Tom_Lundie Thanks for the response. We have already configured this in Splunk SOAR, and I am not able to download as CSV, JSON, PCAP, or STIX. But the requirement is to get all results (including screenshots) as a PDF. Please let me know if you have any suggestions on this.
How can I display the summed data of 2 fields alongside the same date last month (example: 24 June and 24 May)? I have tried the query below; I get the data, but how can I show it in that manner?

index=gc source=apps
| eval AMT=if(IND="DR", BASE_AMT*-1, BASE_AMT)
| eval GLBL1=if(FCR="DR", GLBL*-1, GLBL)
| eval DATE="20".substr(REC_DATE,1,2).substr(REC_DATE,3,2).substr(REC_DATE,5,2)
| eval current_pdate_4=strftime(relative_time(now(), "-30d@d"),"%Y%m%d")
| where DATE = current_pdate_4
| stats sum(AMT) as w4AMT, sum(GLBL1) as w4FEE_AMT by DATE id
| append
    [ search index=gc source=apps
      | eval AMT=if(IND="DR", BASE_AMT*-1, BASE_AMT)
      | eval GLBL1=if(FCR="DR", GLBL*-1, GLBL)
      | eval DATE="20".substr(REC_DATE,1,2).substr(REC_DATE,3,2).substr(REC_DATE,5,2)
      | eval current_pdate_3=strftime(relative_time(now(), "-@d"),"%Y%m%d")
      | where DATE = current_pdate_3
      | stats sum(AMT) as w3AMT, sum(GLBL1) as w3FEE_AMT by DATE id ]
| table DATE, id, w3AMT, w4AMT, w4FEE_AMT, w3FEE_AMT
| rename DATE as currentDATE, w3AMT as currentdata, w3FEE_AMT as currentamt, w4AMT as lastmonthdate, w4FEE_AMT as lastmonthdateamt

Desired output:

DATE      id  currentdata  lastmonthdate  currentamt  lastmonthdateamt
20240723  2   2323         2123           23          24
20240723  3   2423         2123           23          24
20240723  4   2223         2123           23          24
20240723  5   2323         2123           23          24
20240723  6   2329         2123           23          24
20240723  7   2323         2123           23          24
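As a side note on the date math in the query above: `-30d@d` is not the same as "the same date last month", because months vary between 28 and 31 days; SPL's month unit (e.g. `relative_time(now(), "-1mon@d")`) is usually closer to the intent. The calendar logic, sketched in Python for clarity (the helper name is made up):

```python
import calendar
from datetime import date

def same_day_last_month(d: date) -> date:
    """Step back one calendar month, clamping the day for short months."""
    year, month = (d.year, d.month - 1) if d.month > 1 else (d.year - 1, 12)
    day = min(d.day, calendar.monthrange(year, month)[1])  # clamp to month length
    return date(year, month, day)

print(same_day_last_month(date(2024, 6, 24)))  # 2024-05-24
print(same_day_last_month(date(2024, 3, 31)))  # 2024-02-29 (clamped)
```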