All Posts


Hello, I would like to know the aim of this default constraint:

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$) action="success"

In particular, what does it try to match with user=*$ ? User accounts ending with a $ symbol, like machine accounts in Windows?

Thanks.
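To illustrate what I mean, here is a minimal test sketch with made-up usernames; if my reading is right, the NOT clause should filter out the machine-style account ending in $ and keep only svc-backup:

| makeresults format=csv data="user,action
WKSTN01$,success
svc-backup,success"
| search NOT (action=success user=*$)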
Any ideas on TERM and PREFIX limitations with double dashes?

cat /tmp/test.txt
abc//xyz
abc::xyz
abc==xyz
abc@@xyz
abc..xyz
abc--xyz
abc$$xyz
abc##xyz
abc%%xyz
abc\\xyz
abc__xyz

search abc--xyz                  # works
TERM(abc--xyz)                   # doesn't work
TERM(abc*)                       # works
| tstats count by PREFIX(abc)    # doesn't work for abc--xyz

Both TERM and PREFIX work with other minor segmenters like dots or underscores.
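One way to check what actually landed in the index lexicon is walklex (a sketch; assumes Splunk 8.0+ and that the test events went into main):

| walklex index=main type=term prefix=abc
| stats sum(count) as count by term

If abc--xyz shows up as a single term there, TERM(abc--xyz) should be able to match it; if it only appears split apart, the double dash is being broken up at index time.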
Can you try changing the condition value to tok1* instead of tok1?

Here is a run-anywhere example for reference:

<form version="1.1" theme="light">
  <label>Tokens</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="project" searchWhenChanged="true">
      <label>Project</label>
      <choice value="tok1*">Token1</choice>
      <choice value="tok2*">Token2</choice>
      <change>
        <condition value="tok1*">
          <set token="x-key">Key1</set>
        </condition>
        <condition value="tok2*">
          <set token="x-key">Key2</set>
        </condition>
      </change>
    </input>
    <input type="multiselect" token="Record">
      <label>Record</label>
      <delimiter> ,</delimiter>
      <fieldForLabel>Record</fieldForLabel>
      <fieldForValue>Record</fieldForValue>
      <search>
        <query>|makeresults count=5|streamstats count|eval Record="Record".count|eval count=ceil(count/2)|eval project="tok".count."abc",x-key="Key".count|fields - _time,count|search project=$project$ AND x-key=$x-key$</query>
      </search>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Project = $project$ , Key = $x-key$, Record = $Record$</title>
      <table>
        <title>Results</title>
        <search>
          <query>|makeresults count=5|streamstats count|eval Record="Record".count|eval count=ceil(count/2)|eval project="tok".count."abc",x-key="Key".count|fields - _time,count|where Record in ($Record$)</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
@Ayushi-Sriva , You can use dynamic formatting; please refer here: https://docs.splunk.com/Documentation/Splunk/9.2.0/DashStudio/chartsSV#Add_emphasis_to_a_returned_value_using_dynamic_formatting

Please find a run-anywhere example below. You can change the status in the dropdown and it will be reflected in the search via the token. Based on the status, the color changes.

{
  "visualizations": {
    "viz_NJsTjQl4": {
      "type": "splunk.singlevalue",
      "options": {
        "majorColor": "> majorValue | matchValue(majorColorEditorConfig)"
      },
      "dataSources": {
        "primary": "ds_275I8YNY"
      },
      "context": {
        "majorColorEditorConfig": [
          { "match": "Running", "value": "#118832" },
          { "match": "Stopped", "value": "#d41f1f" }
        ]
      }
    }
  },
  "dataSources": {
    "ds_275I8YNY": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\n| eval value=\"$status$\""
      },
      "name": "Search_1"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    },
    "input_BHJAbWl2": {
      "options": {
        "items": [
          { "label": "Running", "value": "Running" },
          { "label": "Stopped", "value": "Stopped" }
        ],
        "token": "status",
        "selectFirstSearchResult": true
      },
      "title": "Status",
      "type": "input.dropdown"
    }
  },
  "layout": {
    "type": "grid",
    "options": { "width": 1440, "height": 960 },
    "structure": [
      {
        "item": "viz_NJsTjQl4",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 1440, "h": 400 }
      }
    ],
    "globalInputs": [ "input_global_trp", "input_BHJAbWl2" ]
  },
  "description": "",
  "title": "single_panel_studio"
}
It’s not about security. We have the option to create alerts in Splunk, with alert actions such as triggering the alert, sending an email, and auto cut. Auto cut means creating ServiceNow incidents from Splunk. For email alerts we grant the schedule_search capability; likewise, what capability do we need to grant to create ServiceNow incidents from Splunk?
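For reference, whatever the required capability turns out to be, capabilities are granted per role in authorize.conf. A sketch with an assumed role name and example capabilities; the actual capability needed depends on the ServiceNow add-on in use:

# authorize.conf (role name and capability list are assumptions)
[role_alert_creators]
importRoles = user
schedule_search = enabled
list_storage_passwords = enabled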
There are a few ways to onboard data into Splunk:
- Install a universal forwarder on the server to send log files to Splunk (see the sketch after this list).
- Have the server send syslog data to Splunk via a syslog server or Splunk Connect for Syslog.
- Use the server's API to extract data for indexing.
- Use Splunk DB Connect to pull data from the server's SQL database.
- Have the application send data directly to Splunk using HTTP Event Collector (HEC).
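For the universal forwarder option, a minimal monitor stanza might look like this; a sketch where the path, index, and sourcetype are assumptions:

# inputs.conf on the universal forwarder
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log
disabled = 0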
1. Is your lookup's ip field using CIDR notation?
2. Did you set up CIDR(ip) in the lookup definition?
3. Are there any overlapping CIDRs that 10.10.10.10 can match in your lookup?

About 3: for example, if your lookup is like this:

ip              Description
10.10.10.10/27  New
10.10.10.0/24   In Progress
10.10.8.0/16    Closed

10.10.10.10 will match all three.
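For reference on point 2, CIDR matching is enabled in the lookup definition; a sketch for a file-based lookup, with the lookup name and filename as assumptions:

# transforms.conf
[ip_ranges]
filename = ip_ranges.csv
match_type = CIDR(ip)
max_matches = 1

With max_matches = 1, only the first matching row (in file order) is returned, which matters when ranges overlap.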
I know it's half a year later, but the answer is still timewrap, as @PickleRick and @ITWhisperer indicated. Using OP's sample search,

| tstats count where index=msexchange host=SMEXCH13* earliest=-14d@d latest=-0d@d by _time span=1h
| timechart span=1h values(count) as count
| timewrap 1w@w
| eval _time = strftime(_time, "%H")
| transpose 0 column_name=week header_field=_time
| search week != _*
| stats var(*) as *
| transpose column_name=hour
| rename "row 1" AS variance

(Note you also don't need to manually calculate variance from scratch. Splunk's stats command has a var function.)

Obviously I do not have the same exchange data, but this can easily be simulated with index=_internal. My results are

hour  variance
00    15664.5
01    72200
02    15488
03    63368
04    14792
05    51842
06    31752
07    69192
08    41123380.5
09    66612.5
10    127296968
11    51842
12    2380.5
13    36414578
14    3120.5
15    12.5
16    0.5
17    138095580.5
18    561694644.5
19    542027812.5
20    565084962
21    531966962
22    558916178
23    563304612.5

Simulation code is simply

| tstats count where index=_internal earliest=-14d@d latest=-0d@d by _time span=1h@h
| timechart span=1h values(count) as count
| timewrap 1w@w
| eval _time = strftime(_time, "%H")
| transpose 0 column_name=week header_field=_time
| search week != _*
| stats var(*) as *
| transpose column_name=hour
| rename "row 1" AS variance
Thanks for replying. These are outgoing calls, being initiated from within the webapp... I guess they are not actually different domains: same domain, but different backend services in our data center.

https://api.office.fedex.com/parcel/fedexoffice/v1/locations/

I'm attaching a screenshot. I have all the API calls (ajax calls) collapsed into a single metric named 'parcel/fedexoffic (ALL)', but when I break it out, there are about 90 different API calls to this one 'parcel' service backend. I would like to somehow configure the adrum agent to send all the /parcel ajax calls to their own EUM application container, and have all the remaining ajax calls that are not /parcel go to a different container.
Are you talking about permissions?  Security might be a better forum. (There used to be a broader Admin forum but it's gone in the new board.) What is an "auto cut alert", anyway?
Bump. I am working the exact same scenario.
- Transaction volume is a daily bell curve, so comparing volume from 16:00 to 15:00 is useless. Transaction volumes will always be increasing or decreasing hour by hour.
- Likewise, comparing today at 16:00 to the average of the last 24 hours is useless. The 24-hour average might be 1M per hour, but the 24-hour deviation might be + or - 50%.
- Transaction volume is also a weekly bell curve, so comparing today's 16:00 traffic to JUST yesterday's 16:00 traffic is okay, but still not great.
- Best case is the exact scenario OP described: compare today's 16:00-17:00 traffic to the AVERAGE of the last 7 or 14 days' 16:00-17:00 traffic, and then alert based on a variance; see the sketch after this list.
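A sketch of that hour-of-day baseline, assuming a hypothetical index and sourcetype (myapp / transactions) and a simple z-score threshold:

index=myapp sourcetype=transactions earliest=-14d@d latest=now
| bin _time span=1h
| stats count by _time
| eval hour=strftime(_time, "%H")
| eventstats avg(count) as avg_count stdev(count) as stdev_count by hour
| where _time >= relative_time(now(), "-1h@h")
| eval zscore=(count - avg_count) / stdev_count
| where abs(zscore) > 3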
Hi @YuriSpirin,

When we define our lookup with time_format = %s, the time field in our collection should have type number:

# collections.conf
[dhcp_timebased_lookup]
enforceTypes = true
field.time = number
field.ip = string
field.hostname = string
field.mac = string

# transforms.conf
[dhcp_timebased_lookup]
collection = dhcp_timebased_lookup
external_type = kvstore
fields_list = time,ip,hostname,mac
max_offset_secs = 691200
min_offset_secs = 0
time_field = time
time_format = %s

We can populate the lookup with test data:

| makeresults format=csv data="time,ip,hostname,mac
1709251200,1.2.3.4,host-43,aa:bb:cc:dd:ee:ff
1709164800,1.2.3.4,host-42,aa:bb:cc:dd:ee:fe
1709078400,1.2.3.4,host-41,aa:bb:cc:dd:ee:fd"
| outputlookup dhcp_timebased_lookup

and validate it with an additional test:

| makeresults format=csv data="_time,dest_ip
1709208000,1.2.3.4"
| lookup dhcp_timebased_lookup ip as dest_ip output hostname

_time                dest_ip  hostname
2024-02-29 07:00:00  1.2.3.4  host-42

We can also experiment with accelerated fields to improve performance, although we may not see the performance returns we expect:

[dhcp_timebased_lookup]
enforceTypes = true
field.time = number
field.ip = string
field.hostname = string
field.mac = string
accelerated_fields.ip = {"time": -1, "ip": 1}

Compare with a similar file-based lookup with a size less than or equal to the configured max_memtable_bytes limit:

# limits.conf
[lookup]
# default 25 MB; increase max_memtable_bytes to a value greater than our
# largest lookup file, assuming we have adequate physical memory available
max_memtable_bytes = 26214400
Hello, I'm trying to search for my detectors based on the tags I gave them. I'm using Terraform to create the charts and detectors. I gave the detectors tags, but when I try to search for them in the SignalFx UI, nothing comes back. I see that sf_tags is something I can filter on, but none of my tags work there. I also can't see any tag information on the detector anywhere. Any guidance on how to get a list of my detectors based on a tag would be helpful.
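One avenue worth trying is the REST API rather than the UI; a sketch, assuming the us1 realm, an access token in $SFX_TOKEN, and that the /v2/detector endpoint's tags query parameter matches the tags set by Terraform:

curl -s -H "X-SF-TOKEN: $SFX_TOKEN" \
  "https://api.us1.signalfx.com/v2/detector?tags=my-tag&limit=50"

If the detectors come back here but not in the UI, that would narrow the problem down to how the UI search treats sf_tags.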
We want to provide a few capabilities to the team. Presently the team has the capability to create email alerts. What capabilities need to be given to create auto cut alerts?
How do I get Slurm log content into Splunk?
Either you indeed have very strange data (from more than 10 years ago?) or you have badly onboarded sources and the date is being parsed incorrectly.
Easiest is to just have two alerts. There's practically *zero* downside to just building a new search (you can start with your existing one!) and then creating an alert out of it once it appears you've got the search all sorted out.

That being said, I think you might just need to change

source = "/tmp/unresponsive" sourcetype=cmi:gems_unresponsive

to be able to do both at once. I'm not sure what you need to change that to, though. MAYBE - if /tmp/unresponsive is the source for either server, maybe all it needs is

source = "/tmp/unresponsive" ( sourcetype=cmi:gems_unresponsive OR sourcetype=<whatever the sourcetype is for the other servers> )

And honestly, I'd go back to that core piece of the search (index=foo, source=bar, sourcetype=baz) and *find the events* first. It should make it more obvious how to get their data in there too.
What's the actual regex you are using to capture it? And if you are sure you are using the right syntax because you copy-pasted it from some working one, you could try adding | eval state = "up" as the last command in the search to force it to be "up" and see if that works. If it doesn't, then I'd say there's something else wrong with that syntax.
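For comparison, a minimal sketch of a named capture with rex; the sample event and field name are made up:

| makeresults
| eval _raw="port 443 state=up proto=tcp"
| rex "state=(?<state>\w+)"
| table state

If a stripped-down version like this works but your search doesn't, the difference between the two regexes is the place to look.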
Splunk doesn't apply any inherent extraction for data as you illustrated. (By default, it extracts key-value pairs connected by an equals sign (=), and some structured raw events such as JSON.) If you see more fields in ASA feeds, it must be the doing of the Splunk Add-on for Cisco ASA. You need to open up that add-on and see what it is doing. Then you can copy it, if that is within copyright. Or you can develop your own extraction strategy to emulate what the Splunk Add-on for Cisco ASA does, or more.

Given that the two data sources are so close in format, there is also a possibility that the Splunk Add-on for Cisco ASA has some configuration you can tweak to include the FTD data type. Consult its documentation, or contact the developers.

This board used to have an app forum that I no longer see. Maybe it is now Splunk Dev? You can check if the Splunk Add-on for Cisco ASA developers are in that forum.
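If you do roll your own, search-time extractions live in props.conf; a sketch, with a hypothetical sourcetype name (cisco:ftd) and a regex loosely modeled on ASA-style connection messages rather than taken from the add-on:

# props.conf (sourcetype name and regex are assumptions)
[cisco:ftd]
EXTRACT-action = \b(?<action>Built|Teardown|Deny)\b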
I don't think that's the right app to read those events. In any case, the app you have installed had its latest release in 2018 and references no Splunk version higher than 7.1, so it looks abandoned.

Instead, it looks like the "Cisco Secure eStreamer Client Add-On for Splunk" (https://splunkbase.splunk.com/app/3662) might extract fields from records with FTD in them. It seems like it focuses on events 430001, 430002, 430003 and 430005. Still, it's worth a shot.

Indeed, right now you could see if you have those - try a search like

index=<your cisco index> FTD (430001 OR 430002 OR 430003 OR 430005)

If that returns a few items (or lots), then the app I mention above should turn that into useful fields. If that search does NOT return any events, ... well, widen the time frame. These seem like they might be less common events, not run-of-the-mill "every tcp session makes 42 zillion of them" ones, so it's possible there's only a few per day or something.

In any case, happy splunking and I hope you find what you need!
-Rich