All Posts


Hi @blanky  It can get pretty complicated trying to extract two different timestamp formats from the same sourcetype - but it isn't impossible. You could try something like this:

== props.conf ==
[yourSourcetype]
TRANSFORMS-overwriteTime = overwriteTime

== transforms.conf ==
[overwriteTime]
INGEST_EVAL = _time=coalesce(strptime(substr(_raw,0,25),"%Y-%m-%d %H:%M:%S"),_time)

This tries to extract the time, using the format provided, out of the first 25 characters of the _raw event (adjust accordingly); if that fails, it falls back on the _time previously determined. This allows you to overwrite the _time extraction for your other data. You can develop this further depending on the various events coming in if necessary.

For more context on this, check out Richard Morgan's fantastic props/transforms examples at https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf#L9

For time format variables see https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Commontimeformatvariables

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
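If the stream really does mix two formats, the same pattern extends with a second strptime inside the coalesce. A minimal sketch, assuming the second format is ISO-8601 with a "T" separator (adjust both format strings and the substring length to your actual events):

== transforms.conf ==
[overwriteTime]
INGEST_EVAL = _time=coalesce(strptime(substr(_raw,0,25),"%Y-%m-%d %H:%M:%S"), strptime(substr(_raw,0,25),"%Y-%m-%dT%H:%M:%S"), _time)

coalesce() returns its first non-null argument, and strptime() returns null when the string doesn't match the format, so whichever pattern matches wins, and anything else keeps the timestamp Splunk already assigned.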
We are collecting various data from security equipment. The data is being stored in index=sec_A and received as sourcetype=A_syslog. In props.conf, several transforms filter the data as follows, splitting it into different sourcetypes and indexes:

[A_syslog]
TRANSFORMS-<class_A> = a, b, c, d
TRANSFORMS-<class_B> = e, f, g

Now I want to add more data to be filtered by b, but this data differs from the data currently being collected in both content and timestamp REGEX, so I think I need to collect it in a different way. Is there a way to specify a different timestamp extraction only for the data being added, while the existing data collection continues?
My macro looks like this:

[|makeresults count=0]
| append [ search `mymacro` | rex --- | rex --- | rex --- | eval -- | eval --- | fields _time, -,-]
| lookup ---
| lookup ---
| lookup ---
| search ---

I'm building a scheduled alert which runs this macro with an earliest and latest time period:

earliest="04/11/2025:12:10:01" latest="04/11/2025:12:20:01" `mymacro`
| table _time IP

This time range is not being passed into the nested subsearch inside the macro. Hope this gives you more info.
I know this is old, but it's the first Google result. A workaround: calculated fields can use the eval lookup() function. The lookup must be a .csv file on the search head(s).

https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/definecalcfields
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/ConditionalFunctions
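For anyone who wants a concrete shape for this, here is a minimal sketch of such a calculated field in props.conf. The sourcetype, the assets.csv lookup file, and the ip/owner/src_ip field names are all hypothetical, and it assumes the eval lookup() function available in Splunk 9.x, which returns its output fields as a JSON object:

== props.conf ==
[my_sourcetype]
EVAL-owner = json_extract(lookup("assets.csv", json_object("ip", src_ip), json_array("owner")), "owner")

json_object() builds the input match, json_array() names the output fields, and json_extract() pulls the single value back out of the returned JSON.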
Aha, glad I asked; I now understand it. The $ShowLiveIndex$ and $ShowSummaryIndex$ tokens in the search query put in "" - the double quote signs - which technically mean nothing, and hence it works.

Thanks a lot. This will help me a lot in making one dashboard dual - Live Query and Summary Index switchable - rather than the 2 versions we were keeping. Super thanks again!
You haven't followed the logic - you are using the wrong tokens in your searches. Try something like this:

<input type="dropdown" token="indextypeboss" searchWhenChanged="true">
  <label>Select Index</label>
  <choice value="bexg-reservations-air">Live Index</choice>
  <choice value="summary-bex-aircpr-details">Summary Index</choice>
  <prefix>index="</prefix>
  <suffix>"</suffix>
  <change>
    <condition label="Live Index">
      <set token="ShowLiveIndexboss"></set>
      <unset token="ShowSummaryIndexboss"></unset>
    </condition>
    <condition label="Summary Index">
      <unset token="ShowLiveIndexboss"></unset>
      <set token="ShowSummaryIndexboss"></set>
    </condition>
  </change>
  <default>summary-bex-aircpr-details</default>
</input>
<input type="time" token="ctimeairboss" searchWhenChanged="true">
  <label>Select Time Range</label>
  <default>
    <earliest>-60m@m</earliest>
    <latest>now</latest>
  </default>
</input>
<table depends="$ShowLiveIndexboss$">
  <title>Success/Fail Ratio on selected TPID, Carrier &amp; GDS (Sorted by Failed Count)</title>
  <search>
    <query>$ShowLiveIndexboss$ my query</query>
    <earliest>$ctimeairboss.earliest$</earliest>
    <latest>$ctimeairboss.latest$</latest>
  </search>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>
<table depends="$ShowSummaryIndexboss$">
  <title>Success/Fail Ratio on selected TPID, Carrier &amp; GDS (Sorted by Failed Count)</title>
  <search>
    <query>$ShowSummaryIndexboss$ my query</query>
    <earliest>$ctimeairboss.earliest$</earliest>
    <latest>$ctimeairboss.latest$</latest>
  </search>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>
As usual, I figured it out shortly after finally asking. Notes are kept in the mc_notes collection in the missioncontrol app, if anyone else was wondering...
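For anyone searching later: KV store collections can generally be read with the rest command against the generic collections-data endpoint. A sketch, assuming the mc_notes collection name from above (the namespace path and the fields you get back may differ in your environment):

| rest splunk_server=local /servicesNS/nobody/missioncontrol/storage/collections/data/mc_notes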
Thank you very much Bishida! We tried the AWS integration from Splunk's Data Management / Add Integration / AWS, which connected successfully. However, we noted that the ECS widget shown in your image only appears when there is an EC2 ECS cluster, which is not our use case - ours is a Serverless (Fargate) cluster. Do you know if there is any way to poll this kind of cluster information? Thanks!
Well, sometimes you have to work with what you have. Nothing shameful about it. Just be aware that this format might require you to craft your searches much more thoughtfully if you want them to be relatively quick. For example, since you have this whole keyname=something,value=somethingelse setup, you can't do a simple something=somethingelse search, because that field isn't extracted and isn't known until you plow through your data with the foreach command. But you can limit your initial search results in that case by simply searching for "somethingelse" as a search term, regardless of where in the event it is. This can hugely improve your search times.
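To illustrate the idea (the index name and the rex are hypothetical, matching the keyname=...,value=... layout described above):

index=mydata "somethingelse"
| rex field=_raw "keyname=(?<keyname>[^,]+),value=(?<value>[^,\s]+)"
| where value="somethingelse"

The bare "somethingelse" term lets the indexers discard non-matching events cheaply before any field extraction runs; the rex and where then confirm the match is actually in the value position.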
Hi there, can you tell me why this is not working? I can see that the searches in both depends tables are executing, and I used your logic itself:

<input type="dropdown" token="indextypeboss" searchWhenChanged="true">
  <label>Select Index</label>
  <choice value="bexg-reservations-air">Live Index</choice>
  <choice value="summary-bex-aircpr-details">Summary Index</choice>
  <prefix>index="</prefix>
  <suffix>"</suffix>
  <change>
    <condition label="Live Index">
      <set token="ShowLiveIndexboss"></set>
      <unset token="ShowSummaryIndexboss"></unset>
    </condition>
    <condition label="Summary Index">
      <unset token="ShowLiveIndexboss"></unset>
      <set token="ShowSummaryIndexboss"></set>
    </condition>
  </change>
  <default>summary-bex-aircpr-details</default>
</input>
<input type="time" token="ctimeairboss" searchWhenChanged="true">
  <label>Select Time Range</label>
  <default>
    <earliest>-60m@m</earliest>
    <latest>now</latest>
  </default>
</input>
<table depends="$ShowLiveIndexboss$">
  <title>Success/Fail Ratio on selected TPID, Carrier &amp; GDS (Sorted by Failed Count)</title>
  <search>
    <query>$indextypeboss$ my query</query>
    <earliest>$ctimeairboss.earliest$</earliest>
    <latest>$ctimeairboss.latest$</latest>
  </search>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>
<table depends="$ShowSummaryIndexboss$">
  <title>Success/Fail Ratio on selected TPID, Carrier &amp; GDS (Sorted by Failed Count)</title>
  <search>
    <query>$indextypeboss$ my query</query>
    <earliest>$ctimeairboss.earliest$</earliest>
    <latest>$ctimeairboss.latest$</latest>
  </search>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>
Hi there, we're currently migrating to ES 8 and need to see Work Notes (comments) provided by analysts in some dashboards/reports. Previously, the incident_updates_lookup contained the "comment" field, which held this information, and was easy to access in a search. With ES 8, this was obviously mentioned as a limitation - "The Comments feature available in prior versions of Splunk Enterprise Security is now replaced by an enhanced capability to add notes." How can we access those notes (KV Store/Lookup/...) outside of having to click through the Mission Control/Analyst Queue manually? Where are they stored?
Hi, thanks for the answers on this. Yes, you have some good points that I will look into. The data is distributed traces from OTel. I am not sure how much I can change, but I will talk to the developers. Cheers Robert
As we have recently enabled various audit settings on our domain, we now have 4662 events being generated on the DCs. I am trying to reduce the number of 4662 events being forwarded to our Splunk backend on the "front end" by tuning the inputs.conf on the DCs. The desired situation is that only events containing one of the GUIDs that indicate a potential DCSync attack are forwarded to Splunk: "Replicating Directory Changes all", "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2", "1131f6ac-9c07-11d1-f79f-00c04fc2dcd2" or "9923a32a-3607-11d2-b9be-0000f87a36b2" (from https://www.praetorian.com/blog/active-directory-visualization-for-blue-teams-and-threat-hunters/). So: drop all 4662 events, except those matching any of these GUIDs. I've been playing with the existing blacklist line for event 4662 to fulfil this purpose, but can't seem to get it to work, not even for a single one of these GUIDs, for example:

blacklist1 = EventCode="4662" Message="Properties:\sControl\sAccess\s^(?!.*{1131f6ac-9c07-11d1-f79f-00c04fc2dcd2})"

Obviously I've restarted the Splunk forwarder after every tweak. Can anybody help with compiling a proper blacklist entry?
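For what it's worth, the usual shape for "drop everything except" is a single negative lookahead anchored at the very start of the expression. A sketch, not tested against these exact events (the (?s) lets .* cross the newlines of a multi-line 4662 Message, and the GUID list is taken from the post above):

blacklist1 = EventCode="4662" Message="(?s)^(?!.*(?:1131f6ad-9c07-11d1-f79f-00c04fc2dcd2|1131f6ac-9c07-11d1-f79f-00c04fc2dcd2|9923a32a-3607-11d2-b9be-0000f87a36b2))"

This blacklists any 4662 event whose Message does not contain one of the GUIDs; events that do contain one fail the lookahead, escape the blacklist, and are forwarded.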
That is a wonderful answer, and thanks very much. I did find another issue where I have multiple lines on the same line, so I have accepted another answer - but thank you very much.
Wow, what a great answer to my issues! I am using this, and I want to thank you very much again. I did not spot the mr_batchID as unique, so I am using it now.
Hi @Hemant_h  The reply=1 suggests that the token is disabled (see https://docs.splunk.com/Documentation/Splunk/9.4.1/Data/TroubleshootHTTPEventCollector#:~:text=Forbidden-,Token%20disabled,-2).

Please can you confirm that the token is enabled on your destination? You can also validate that the token is working using https://<yourHECEndpoint>/services/collector/health?token=<yourToken> which should reply {"text":"HEC is healthy","code":17}

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
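For example, from the sending host (hostname, port and token here are placeholders):

curl -k "https://hec.example.com:8088/services/collector/health?token=11111111-2222-3333-4444-555555555555"

A healthy endpoint with a valid token returns the {"text":"HEC is healthy","code":17} response above; if the token is disabled or wrong you should get an error instead, which narrows the problem down quickly.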
Short documentation reminder: https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/Prerequisites "KV store must also be active and working properly as of DB Connect version 3.10.0 and higher"
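A quick way to verify that on the DB Connect host is the standard Splunk CLI:

$SPLUNK_HOME/bin/splunk show kvstore-status

This reports whether the KV store is in a ready state before you chase DB Connect errors further.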
Is your issue resolved? I'm getting the same error on an HF for HEC tokens.
ERROR HttpInputDataHandler [3996076 HttpDedicatedIoThread-0] - Failed processing http input, token name=cnollc-cnoiwf-stg3.pegacloud.net, channel=n/a, source_IP=192.168.11.39, reply=1, events_processed=0, http_input_body_size=524, parsing_err=""

We are getting this error. We have configured dual ingestion: the same server is sending logs to both the on-prem and Cloud environments. How do we fix this error?