
All Posts

That is interesting. I didn't have the opportunity to test it, but if it is so, it looks like support case material. See my other reply.
As far as I remember, there are two kinds of accounts with names ending in $ (in Windows - for other systems it's highly unlikely that an account would be named this way, but it would be nice to account for that) - Managed Service Accounts (which @gcusello already mentioned) and computer accounts. Both of those account types are authenticated without using interactive authentication modes, so they're irrelevant to the events you're looking for in this dataset.
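For reference, the same exclusion can be reproduced in a plain search, roughly like this (a minimal sketch; the index and sourcetype are placeholders for your Windows event data):

index=wineventlog sourcetype=XmlWinEventLog EventCode=4624 NOT user=*$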
Again, that's still a relatively unusual setup because normally you'd rather have a single bigger license, and you should just set up a License Manager and split the license "internally" between indexers. If you managed to get two separate smaller licenses "counted" as one, each of them might indeed be non-enforcing. If you open Settings->Licensing and click on "All license details" you'll see whether your installed license has "ConditionalLicensingEnforcement" or not. If it's indeed non-enforcing it will... well, not enforce license limits. (Remember though that if you keep exceeding your license entitlement it might show up, for example, in the diag package when you create a support case, and it might lead to some uncomfortable questions ;-))
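You can also inspect the installed licenses from SPL (a minimal sketch; the exact field names returned by the licenser endpoint may differ between versions):

| rest /services/licenser/licenses
| table label, type, quota, status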
If you're not sure about the assumptions then consider sharing the inputs.conf stanza so others can check it for you. Can you search for other data sources from the same UF? Is the monitored file being updated? How are you trying to search for the data? Try using earliest=-1y latest=+1y in case timestamps are incorrect.
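For example, to look for any data from that forwarder regardless of timestamp (the host name is a placeholder):

index=* host=<uf_hostname> earliest=-1y latest=+1y
| stats count by index, sourcetype, source

It can also help to check the forwarder's own logs for file-monitoring errors:

index=_internal host=<uf_hostname> source=*splunkd.log* (TailReader OR TailingProcessor)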
Hi @splunkreal, user names ending with $ are Windows service accounts and usually they aren't relevant in authentication monitoring. Ciao. Giuseppe
To give you guys some more context, the 50GB is only a part of the complete license. We are migrating into a newly built environment and moved 100+GB from the old to the new environment and started migrating; that's why we are exceeding only in the last few days on the old environment now. So it's not one big license but instead 2 different sized ones. Both are indeed non-enforcing.
Hello, I would like to know the aim of this default constraint:

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$) action="success"

Especially, what does it try to match with user=*$? User accounts ending with the $ symbol, like in Windows?

Thanks.
any ideas on TERM and PREFIX limitations with double dashes?

cat /tmp/test.txt
abc//xyz
abc::xyz
abc==xyz
abc@@xyz
abc..xyz
abc--xyz
abc$$xyz
abc##xyz
abc%%xyz
abc\\xyz
abc__xyz

search abc--xyz                  # works
TERM(abc--xyz)                   # doesn't work
TERM(abc*)                       # works
| tstats count by PREFIX(abc)    # doesn't work for abc--xyz

Both TERM and PREFIX work with other minor segmenters like dots or underscores.
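One way to see how those strings were actually tokenized at index time is walklex (a diagnostic sketch; the index name is a placeholder and output field names may vary by version):

| walklex index=<your_index> type=term prefix=abc
| table term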
Can you try changing the condition value to tok1* instead of tok1? Here is a run anywhere example for reference:

<form version="1.1" theme="light">
  <label>Tokens</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="project" searchWhenChanged="true">
      <label>Project</label>
      <choice value="tok1*">Token1</choice>
      <choice value="tok2*">Token2</choice>
      <change>
        <condition value="tok1*">
          <set token="x-key">Key1</set>
        </condition>
        <condition value="tok2*">
          <set token="x-key">Key2</set>
        </condition>
      </change>
    </input>
    <input type="multiselect" token="Record">
      <label>Record</label>
      <delimiter> ,</delimiter>
      <fieldForLabel>Record</fieldForLabel>
      <fieldForValue>Record</fieldForValue>
      <search>
        <query>|makeresults count=5|streamstats count|eval Record="Record".count|eval count=ceil(count/2)|eval project="tok".count."abc",x-key="Key".count|fields - _time,count|search project=$project$ AND x-key=$x-key$</query>
      </search>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Project = $project$ , Key = $x-key$, Record = $Record$</title>
      <table>
        <title>Results</title>
        <search>
          <query>|makeresults count=5|streamstats count|eval Record="Record".count|eval count=ceil(count/2)|eval project="tok".count."abc",x-key="Key".count|fields - _time,count|where Record in ($Record$)</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
@Ayushi-Sriva, you can use dynamic formatting; please refer here: https://docs.splunk.com/Documentation/Splunk/9.2.0/DashStudio/chartsSV#Add_emphasis_to_a_returned_value_using_dynamic_formatting

Please find a run anywhere example. You can change the status in the dropdown and it will be reflected in the search via the token. Based on the status, the color changes:

{
  "visualizations": {
    "viz_NJsTjQl4": {
      "type": "splunk.singlevalue",
      "options": {
        "majorColor": "> majorValue | matchValue(majorColorEditorConfig)"
      },
      "dataSources": {
        "primary": "ds_275I8YNY"
      },
      "context": {
        "majorColorEditorConfig": [
          { "match": "Running", "value": "#118832" },
          { "match": "Stopped", "value": "#d41f1f" }
        ]
      }
    }
  },
  "dataSources": {
    "ds_275I8YNY": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\n| eval value=\"$status$\""
      },
      "name": "Search_1"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    },
    "input_BHJAbWl2": {
      "options": {
        "items": [
          { "label": "Running", "value": "Running" },
          { "label": "Stopped", "value": "Stopped" }
        ],
        "token": "status",
        "selectFirstSearchResult": true
      },
      "title": "Status",
      "type": "input.dropdown"
    }
  },
  "layout": {
    "type": "grid",
    "options": {
      "width": 1440,
      "height": 960
    },
    "structure": [
      {
        "item": "viz_NJsTjQl4",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 1440, "h": 400 }
      }
    ],
    "globalInputs": [
      "input_global_trp",
      "input_BHJAbWl2"
    ]
  },
  "description": "",
  "title": "single_panel_studio"
}
It's not about security. We have the option to create alerts in Splunk, with action items such as alert trigger, email alert, auto cut, etc. "Auto cut" means creating ServiceNow incidents from Splunk. For email alerts we have the schedule_search capability; likewise, what capability do we need to give so the team can create ServiceNow incidents from Splunk?
There are a few ways to onboard data into Splunk:
- Install a universal forwarder on the server to send log files to Splunk (see the inputs.conf sketch after this list)
- Have the server send syslog data to Splunk via a syslog server or Splunk Connect for Syslog
- Use the server's API to extract data for indexing
- Use Splunk DB Connect to pull data from the server's SQL database
- Have the application send data directly to Splunk using HTTP Event Collector (HEC)
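For example, a minimal monitor stanza on the universal forwarder might look like this (the path, index, and sourcetype are placeholders for your environment):

# inputs.conf on the universal forwarder
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp:log
disabled = 0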
1. Is your lookup's ip field using CIDR notation?
2. Did you set up CIDR(ip) in the lookup definition?
3. Are there any overlapping CIDRs (ip) that 10.10.10.10 can match in your lookup?

About 3, for example, if your lookup is like this:

ip              Description
10.10.10.10/27  New
10.10.10.0/24   In Progress
10.10.8.0/16    Closed

10.10.10.10 will match all three.
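For reference, CIDR matching is enabled in the lookup definition, roughly like this (the lookup name and file are placeholders):

# transforms.conf
[my_cidr_lookup]
filename = my_cidr_lookup.csv
match_type = CIDR(ip)
# optionally return only the first match
max_matches = 1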
I know it's half a year later, but the answer is still timewrap as @PickleRick and @ITWhisperer indicated. Using OP's sample search,

| tstats count where index=msexchange host=SMEXCH13* earliest=-14d@d latest=-0d@d by _time span=1h
| timechart span=1h values(count) as count
| timewrap 1w@w
| eval _time = strftime(_time, "%H")
| transpose 0 column_name=week header_field=_time
| search week != _*
| stats var(*) as *
| transpose column_name=hour
| rename "row 1" AS variance

(Note you also don't need to manually calculate variance from scratch. Splunk stats has a var function.)

Obviously I do not have the same exchange data, but this can easily be simulated with index=_internal. My results are

hour  variance
00    15664.5
01    72200
02    15488
03    63368
04    14792
05    51842
06    31752
07    69192
08    41123380.5
09    66612.5
10    127296968
11    51842
12    2380.5
13    36414578
14    3120.5
15    12.5
16    0.5
17    138095580.5
18    561694644.5
19    542027812.5
20    565084962
21    531966962
22    558916178
23    563304612.5

Simulation code is simply

| tstats count where index=_internal earliest=-14d@d latest=-0d@d by _time span=1h@h
| timechart span=1h values(count) as count
| timewrap 1w@w
| eval _time = strftime(_time, "%H")
| transpose 0 column_name=week header_field=_time
| search week != _*
| stats var(*) as *
| transpose column_name=hour
| rename "row 1" AS variance
Thanks for replying. These are outgoing calls, being initiated from within the webapp... I guess they are not actually different domains... same domain, but different backend services in our data center. https://api.office.fedex.com/parcel/fedexoffice/v1/locations/ I'm attaching a screenshot. I have all the api calls (ajax calls) collapsed into a single metric named 'parcel/fedexoffic (ALL)', but when I break it out, there are about 90 different api calls to this one 'parcel' backend service. I would like to somehow configure the adrum agent to send all the /parcel ajax calls to their own EUM application container, and have all the remaining ajax calls that are not /parcel go to a different container.
Are you talking about permissions? Security might be a better forum. (There used to be a broader Admin forum but it's gone in the new board.) What is an "auto cut alert", anyway?
Bump.  I am working the exact same scenario. Transaction volume is a daily bell curve, so comparing volume from 16:00 to 15:00 is useless.  Tran volumes will always be increasing or decreasing hour ... See more...
Bump. I am working the exact same scenario.

Transaction volume is a daily bell curve, so comparing volume from 16:00 to 15:00 is useless. Transaction volumes will always be increasing or decreasing hour by hour.

Transaction volume is a daily bell curve, so comparing today at 16:00 to the average of the last 24 hours is useless. The 24 hour average might be 1M per hour, but the 24 hour deviation might be + or - 50%.

Transaction volume is also a weekly bell curve, so comparing today's 16:00 traffic to JUST yesterday's 16:00 traffic is okay, but still not great.

Best case is the exact scenario the OP described: compare today's 16:00-17:00 traffic to the AVERAGE of the last 7 or 14 days' 16:00-17:00 traffic and then alert based on a variance.
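A minimal sketch of that hour-of-day baseline in SPL, assuming a placeholder index name and a 2-sigma alert threshold (both are assumptions to tune for your data):

| tstats count where index=<transactions> earliest=-14d@d latest=now by _time span=1h
| eval hour=strftime(_time, "%H"), is_today=if(_time >= relative_time(now(), "@d"), 1, 0)
| eventstats avg(eval(if(is_today=0, count, null()))) as baseline, stdev(eval(if(is_today=0, count, null()))) as sd by hour
| where is_today=1 AND abs(count - baseline) > 2 * sd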
Hi @YuriSpirin, When we define our lookup with time_format = %s, the time field in our collection should have type number:

# collections.conf
[dhcp_timebased_lookup]
enforceTypes = true
field.time = number
field.ip = string
field.hostname = string
field.mac = string

# transforms.conf
[dhcp_timebased_lookup]
collection = dhcp_timebased_lookup
external_type = kvstore
fields_list = time,ip,hostname,mac
max_offset_secs = 691200
min_offset_secs = 0
time_field = time
time_format = %s

We can populate the lookup with test data:

| makeresults format=csv data="time,ip,hostname,mac
1709251200,1.2.3.4,host-43,aa:bb:cc:dd:ee:ff
1709164800,1.2.3.4,host-42,aa:bb:cc:dd:ee:fe
1709078400,1.2.3.4,host-41,aa:bb:cc:dd:ee:fd"
| outputlookup dhcp_timebased_lookup

and validate it with an additional test:

| makeresults format=csv data="_time,dest_ip
1709208000,1.2.3.4"
| lookup dhcp_timebased_lookup ip as dest_ip output hostname

_time                dest_ip  hostname
2024-02-29 07:00:00  1.2.3.4  host-42

We can also experiment with accelerated fields to improve performance, although we may not see the performance returns we expect:

[dhcp_timebased_lookup]
enforceTypes = true
field.time = number
field.ip = string
field.hostname = string
field.mac = string
accelerated_fields.ip = {"time": -1, "ip": 1}

Compare with a similar file-based lookup with a size less than or equal to the configured max_memtable_bytes limit:

# limits.conf
[lookup]
# default 25 MB; increase max_memtable_bytes to a value greater than our
# largest lookup file, assuming we have adequate physical memory available
max_memtable_bytes = 26214400
Hello, I'm trying to search for my detectors based on the tags I gave them. I'm using terraform to create the charts and detectors. I gave the detectors tags but when I try to search for them in the ... See more...
Hello, I'm trying to search for my detectors based on the tags I gave them. I'm using Terraform to create the charts and detectors. I gave the detectors tags, but when I try to search for them in the SignalFx UI nothing comes back. I see that sf_tags is something I can filter on, but none of my tags work there. I also can't see any tag information on the detector anywhere. Any guidance on how to get a list of my detectors based on a tag would be helpful.
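One way to narrow this down is to query the detector REST API directly (a hedged sketch; the realm, token, and especially the tags query parameter are assumptions to verify against the current detector API docs):

curl -s -H "X-SF-TOKEN: <your_org_token>" \
  "https://api.<realm>.signalfx.com/v2/detector?tags=<your_tag>"

If the API returns the detectors but the UI search does not, the tags were applied by Terraform and the issue is on the UI filtering side.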
We want to provide a few capabilities to the team. Presently the team has the capability to create email alerts. What capabilities need to be given to create auto cut alerts?