All Posts


Are you using HEC, or the UF's S2S over HTTP? Your token name is a little odd for a normal HEC token. Officially the format should be a GUID, but I know that, at least with earlier versions, other formats have also worked.
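For reference, a plain HEC request looks something like this (the host and token here are made up, shown only to illustrate the usual GUID-shaped token):

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 12345678-1234-1234-1234-123456789012" \
  -d '{"event": "hello world", "sourcetype": "test"}'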
Others have already commented on this, so just some additions and clarifications. In Splunk you should think of one sourcetype as one lexical format of event. So if events have a different number of fields, a different field order, or even differently formatted timestamps, or timestamps in different places, you should use separate sourcetypes for them. As @livehybrid shows, you can extract different timestamp formats and evaluate them correctly with INGEST_EVAL. There are a couple of examples in the community, and some .conf presentations have additional examples. The easiest way to test this is to ingest the events into your test environment/test indexes and then use SPL with a one-line eval to check how you can get the correct format. See e.g.
https://community.splunk.com/t5/Getting-Data-In/Best-way-to-extract-time-from-file-name-and-text/m-p/677542
https://conf.splunk.com/files/2020/slides/PLA1154C.pdf
Both contain some examples. Also check whether you need to use := instead of = in your INGEST_EVAL. r. Ismo
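As a quick illustration of that one-line test (the index, sourcetype, substr range and format string are just placeholders), you can check a strptime format against sample events before committing it to INGEST_EVAL:

index=test sourcetype=my_test
| eval parsed=strptime(substr(_raw,1,25), "%Y-%m-%d %H:%M:%S")
| eval readable=strftime(parsed, "%F %T")
| table _time parsed readable _raw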
Wait a second. Technicalities aside, it seems you're trying to do exactly the opposite of what you say you want to do.
I'm not 100% sure what you want to do and you're being quite vague about it. As @livehybrid already said, there are some ways to override the default timestamp recognition, but I'll add that it's needlessly complicated, may be difficult to maintain, and adds extra load on the indexers since the timestamp has to be parsed out of the event twice. While dynamic routing to another index is a pretty common thing, and recasting one general sourcetype into "subsourcetypes" which are parsed into slightly different fields at search time is also not unusual, splitting a single sourcetype/source/host stream into completely differently treated events is typically an indication that someone didn't bother to properly classify and split the data upstream (like reading the whole /var/log/messages, or ingesting syslog from the whole environment as the "syslog" sourcetype).
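For context, both of those common cases are handled with index-time transforms along these lines (the stanza names, patterns and targets below are invented for illustration, and each stanza still has to be referenced from a TRANSFORMS- line in props.conf, as shown in the other answers):

== transforms.conf ==
[route_firewall_to_secindex]
REGEX = FIREWALL
DEST_KEY = _MetaData:Index
FORMAT = sec_firewall

[recast_to_subsourcetype]
REGEX = %ASA-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:asa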
@cmutt78_2  Were you able to see the data input after restarting the Splunk services, or is it still missing? My Akamai data input: (screenshot) Where did you install the Akamai add-on, on the Heavy Forwarder (HF)? If it's on the HF, does it have a valid license? Some features require an Enterprise license and aren't available with the Free license. For a heavy forwarder, you should set up one of the following options:
1) Make the HF a slave of a license master. This gives the HF all of the enterprise capabilities, and the HF consumes no license as long as it does not index data.
2) Install the forwarder license. This gives the HF many enterprise capabilities, but not all. The HF will be able to parse and forward data. However, it will not be permitted to index, and it will not be able to act as a deployment server (as an example). This is the option I would usually choose.
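If you go with option 1, pointing the HF at the license master is usually a small config change, something like the following (the hostname is a placeholder, and note that newer Splunk versions call this setting manager_uri rather than master_uri):

== server.conf ==
[license]
master_uri = https://license-master.example.com:8089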
It is not clear what your issue is. If you specify earliest and latest using the format you have used, they appear to be passed to the macro (which begins with "index=..."); if you don't specify an overriding time, the time specified by the search also seems to be used. Please provide more precise detail about what your macro actually is (obfuscating as little as possible), how you have used it in the search, and how you have set up the alert.
@cmutt78_2  Could you please check the splunkd.log file? It may contain information explaining why the data input from the add-on isn't appearing.  
yep, I am thinking it is an app issue
@cmutt78_2  After installing the Akamai Splunk Connector, did you try restarting the Splunk instance?
Not there and no additional pages to navigate  
@cmutt78_2  Please try clicking Settings -> Data Inputs, then look for Akamai Security Incident Event Manager API. Once you locate it, click on it and follow the instructions on this page: https://techdocs.akamai.com/siem-integration/docs/siem-splunk-connector#install-the-splunk-connector
Upon installing the Akamai SIEM add-on, I am not seeing the data input option for "Akamai Security Incident Event Manager API". Please advise? Java is installed, and we are running Splunk 9.3.3.
Hi @blanky  It can get pretty complicated trying to extract two different timestamp formats from the same sourcetype, but it isn't impossible. You could try something like this:

== props.conf ==
[yourSourcetype]
TRANSFORMS-overwriteTime = overwriteTime

== transforms.conf ==
[overwriteTime]
INGEST_EVAL = _time=coalesce(strptime(substr(_raw,1,25),"%Y-%m-%d %H:%M:%S"),_time)

This tries to extract the time, using the format provided, out of the first 25 characters of the _raw event (adjust accordingly), and if that fails it falls back on the previously determined _time. This allows you to overwrite the _time extraction for your other data. You can develop this further depending on the various events coming in, if necessary. For more context, check out Richard Morgan's fantastic props/transforms examples at https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf#L9
For time format variables see https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Commontimeformatvariables
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
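Once that is in place, a quick sanity check after ingesting a few test events (the index and sourcetype are placeholders) is to render the parsed _time next to the raw event and eyeball that they agree:

index=test sourcetype=yourSourcetype
| eval ts=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table _time ts _raw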
We are collecting various data from security equipment. The data is being stored in index=sec_A and received as sourcetype=A_syslog. In props.conf, several transforms filter the data as follows, splitting it into different sourcetypes and indexes:

[A_syslog]
TRANSFORMS-<class_A> = a, b, c, d
TRANSFORMS-<class_B> = e, f, g

Now I want to add more data to be filtered by b, but this data differs from the data currently being collected in its timestamp REGEX, so I think I need to collect it in a different way. Is there a way to specify a different timestamp value only for the data being added, while data collection continues?
My macro usage looks like this:

[|makeresults count=0]
| append [ search `mymacro` | rex --- | rex --- | rex --- | eval -- | eval --- | fields _time, -, -]
| lookup ---
| lookup ---
| lookup ---
| search ---

I'm building a scheduled alert which runs this macro with an earliest and latest time period:

earliest="04/11/2025:12:10:01" latest="04/11/2025:12:20:01" `mymacro` | table _time IP

The time range is not being passed into the nested subsearch inside the macro above. Hope this gives you more info.
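In case it helps: inline earliest/latest terms are scoped to the search clause they appear in, so they do not reach into an append subsearch. One workaround (sketched here with the same placeholder fields) is to state them inside the subsearch itself, or to set the alert's dispatch time range instead:

[|makeresults count=0]
| append [ search earliest="04/11/2025:12:10:01" latest="04/11/2025:12:20:01" `mymacro` | fields _time, IP]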
I know this is old, but it's the first Google result. A workaround: calculated fields can use | eval lookup(). The lookup must be a .csv file on the search head(s).
https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/definecalcfields
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/ConditionalFunctions
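To sketch what that can look like (the lookup name, key field and output field are hypothetical, and the lookup() eval function needs Splunk 9.0 or later): lookup() returns the output fields as a JSON object, so the calculated field has to extract the value, e.g. with spath:

== props.conf ==
[my_sourcetype]
EVAL-status_description = spath(lookup("http_status_lookup", json_object("status", status), json_array("status_description")), "status_description")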
Aha, glad I asked; I now understand it. The $ShowLiveIndex$ and $ShowSummaryIndex$ tokens in the search query put in "" (the double quote signs), which technically mean nothing, and hence it works. Thanks a lot. This will help me make one dashboard dual, switchable between live query and summary index, rather than keeping two versions as we were. Super thanks again.
You haven't followed the logic, you are using the wrong tokens in your searches. Try something like this

<input type="dropdown" token="indextypeboss" searchWhenChanged="true">
  <label>Select Index</label>
  <choice value="bexg-reservations-air">Live Index</choice>
  <choice value="summary-bex-aircpr-details">Summary Index</choice>
  <prefix>index="</prefix>
  <suffix>"</suffix>
  <change>
    <condition label="Live Index">
      <set token="ShowLiveIndexboss"></set>
      <unset token="ShowSummaryIndexboss"></unset>
    </condition>
    <condition label="Summary Index">
      <unset token="ShowLiveIndexboss"></unset>
      <set token="ShowSummaryIndexboss"></set>
    </condition>
  </change>
  <default>summary-bex-aircpr-details</default>
</input>
<input type="time" token="ctimeairboss" searchWhenChanged="true">
  <label>Select Time Range</label>
  <default>
    <earliest>-60m@m</earliest>
    <latest>now</latest>
  </default>
</input>
<table depends="$ShowLiveIndexboss$">
  <title>Success/Fail Ratio on selected TPID, Carrier &amp; GDS (Sorted by Failed Count)</title>
  <search>
    <query>$ShowLiveIndexboss$ my query</query>
    <earliest>$ctimeairboss.earliest$</earliest>
    <latest>$ctimeairboss.latest$</latest>
  </search>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>
<table depends="$ShowSummaryIndexboss$">
  <title>Success/Fail Ratio on selected TPID, Carrier &amp; GDS (Sorted by Failed Count)</title>
  <search>
    <query>$ShowSummaryIndexboss$ my query</query>
    <earliest>$ctimeairboss.earliest$</earliest>
    <latest>$ctimeairboss.latest$</latest>
  </search>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>
As usual, I figured it out shortly after finally asking. Notes are kept in the mc_notes collection in the missioncontrol app, if anyone else was wondering...
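For anyone else hunting for these: if your role can read the collection, a REST query along these lines may list the stored notes (an untested sketch; the exact namespace may differ on your install):

| rest /servicesNS/nobody/missioncontrol/storage/collections/data/mc_notes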