All Posts



Hi @bowesmana, sorry for the confusion. There is only one index="a". Yes, line 4 has no purpose for now. I want to pick a user from the subsearch, say user1, which we got at 1:23:20 AM. Then I want to look for all the logs in the main query for user1 up to 1:25:20 AM, and if I find one, I want it listed. I also want only the users from the subsearch to appear in the list, and no other users that the main query picks up. Please let me know if I was able to explain it.
After I set up props.conf on the SH and then checked my UF side and restarted it, the following error appears: Checking conf files for problems... Invalid key in stanza [web:access] in /opt/splunkforwarder/etc/apps/dynasafe_course_demo_ta/local/props.conf, line 3: ENVENT_BREAKER (value: ([\r\n]+)). Invalid key in stanza [web:secure] in /opt/splunkforwarder/etc/apps/dynasafe_course_demo_ta/local/props.conf, line 6: ENVENT_BREAKER (value: ([\r\n]+)). Your indexes and inputs configurations are not internally consistent. What is going on?
I'm not sure whether the bug in fieldformat has been fixed yet. At least in some earlier versions it didn't work correctly in all cases inside a foreach loop. Nope. These are from 9.2.0.1. I'm beginning to suspect that this is by design.
Maybe this helps you in future https://conf.splunk.com/files/2019/recordings/FN1315.mp4 https://conf.splunk.com/files/2019/slides/FN1315.pdf "Did we just lose ALL our knowledge objects? Do you know how much time and energy that was?" After a destructive resync, Paychex lost two months of its knowledge object creations/modifications. We learned to be prepared if it were to ever happen again. How? It's easier than you might think, and you don't have to be an admin. You’ll learn how to proactively save your work (dashboards, reports, data models, MLTK experiments, ITSI glass tables, macros, views, etc.) and audit changes when they occur. You will leave the session knowing how to manage the ever-increasing amount of things you create. You'll also have solutions that can save you time and effort from having to recreate lost/modified objects, including how to restore service faster. You also will come away with peace of mind knowing that you can take control of safeguarding and protecting your work, thereby covering your assets when a disaster happens.  
@gcusello The last event to come from the _internal index was on February 27, 2024. I am pasting the result of your search below. Could you please help me understand the issue with the queue?
Hi Usually you will get some hints about the product by looking at the directory hierarchy and the names in it. r. Ismo
@gcusello How can I execute your suggested search? It starts with index=_internal, and there is no data coming from that index.
Hi You have those on your indexer(s)/heavy forwarders, and your source is probably JSON? Do you have a KV_MODE=json definition on the SH side for those sourcetypes? r. Ismo
Hi One option is to use authorize.conf with the following values:

srchTimeWin = <integer>
* Maximum time range, in seconds, of a search.
* The Splunk platform applies this search time range limit backwards from the latest time specified for a search.
* If a user has multiple roles with distinct search time range limits, or has roles that inherit from roles with distinct search time range limits, the Splunk platform applies the least restrictive search time range limits to the role.
* For example, if user X has role A (srchTimeWin = 30s), role B (srchTimeWin = 60s), and role C (srchTimeWin = 3600s), user X gets a maximum search time range of 1 hour.
* When set to '-1', the role does not have a search time range limit. This value can be overidden by the maximum search time range value of an inherited role.
* When set to '0' (infinite), the role does not have a search time range limit. This value cannot be overidden by the maximum search time range value of an inherited role.
* This setting does not apply to real-time searches.
* Default: -1

srchTimeEarliest = <integer>
* The earliest event time that can be searched, in seconds before the current wall clock time.
* If a user is a member of a role with a 'srchTimeEarliest' limit, or a role that inherits from other roles with 'srchTimeEarliest' limits, the Splunk platform applies the least restrictive time limit from the roles to the user.
* For example, if a user is a member of role A (srchTimeEarliest = 86400), and inherits role B (srchTimeEarliest = 3600) and role C (srchTimeEarliest = -1 (default)), the user gets an effective earliest time limit of 1 day (86400 seconds) ago.
* When set to '-1', the role does not have an earliest time limit. This value can be overidden by the earliest time value of an inherited role.
* When set to '0' (infinite), the role does not have an earliest time limit. This value cannot be overidden by the earliest time limit value of an inherited role.
* This setting does not apply to real-time searches.
* Default: -1

With those you can define the earliest searchable time and also the search span. Just create a separate role from your normal user role and use it for the users to whom you want to apply these restrictions. r. Ismo
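As a minimal sketch, such a role stanza in authorize.conf could look like the following (the role name and the limit values here are hypothetical, not from this thread; adjust to your needs):

```
[role_restricted_search]
importRoles = user
# searches may span at most 24 hours (in seconds)
srchTimeWin = 86400
# and may not look back further than 7 days
srchTimeEarliest = 604800
```

Users assigned this role would then be limited both in how far back they can search and in how wide a single search window can be.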
Hi Maries, Please try with the search below: <your curl command> -d search="search index=<indexname> | stats count as field1 | eval field1="dallvcflwb110u,yes;dallvcflwb120u,yes" | eval field1=split(field1,";") | mvexpand field1 | rex field=field1 \"(?<host>.*),(?<mode>.*)\" | table host mode | outputlookup atlassian_maintenance.csv"
Hi Those should be removed quite soon; usually it depends somehow on your environment. Are you sure that you canceled and/or removed those jobs from all apps? For users, it seems to show jobs only for the current app in default mode when you press the Activity tab. Just select All in the App selection and check that all of them have been removed. r. Ismo
Hi @billy, first of all, don't attach a new question to another one, especially a closed one, because it's more difficult to get an answer; it's always better to open a new question, even on the same topic, to get a surely faster and probably better answer. Anyway, in this way you block Splunk's own monitoring, and that isn't a good idea because you're blind to how Splunk is running. Why do you want this? The Splunk logs don't consume license, and you can limit storage consumption by using a limited (e.g. 7 days) retention on these logs. Anyway, are you sure that you still receive these logs from that Forwarder? I say this because with the configuration you shared it isn't possible to receive these logs from that Forwarder. Check whether the logs you're receiving have that source (the one in the monitor stanza header) and that host (the Forwarder where you changed the configuration). Ciao. Giuseppe
Hi Please don't include your admin user and its password in any post. And don't type them directly on the command line either, as they are stored in history files and/or visible in the process list! A much better way is to read them into a variable and then use that in your queries. You could do it like   read -s USERPASS   (type admin:<your pass here>, then press Enter)   curl -ku "$USERPASS" .....   Also, don't put your real node name into examples! As you are on linux/*nix, you can replace the outer " with ' and then it should work; then you don't need \" escapes inside your SPL. curl -ku $USERPASS https://<your splunk SH>:<mgmt port>/servicesNS/admin/SRE/search/jobs/export -d search='| stats count as field1 | eval field1="dallvcflwb110u,yes;dallvcflwb120u,yes" | eval field1=split(field1,";") | mvexpand field1 | rex field=field1 "(?<host>.*),(?<mode>.*)" | table host mode | outputlookup atlassian_maintenance.csv' On Windows this doesn't work ;-( r. Ismo
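If you prefer not to type the credentials each time, a non-interactive sketch of the same idea (the file name and the example password below are made up):

```shell
# Keep the credentials in a file only you can read, created once by
# hand, then load them into a variable instead of typing them on the
# command line where they would land in shell history.
umask 077
printf 'admin:example-password' > /tmp/splunk_cred   # illustrative path
read -r USERPASS < /tmp/splunk_cred
echo "${USERPASS%%:*}"   # prints just the user part: admin
```

The variable can then be passed to curl as -ku "$USERPASS"; note the expanded value is still briefly visible in the process list while curl runs, same as with the interactive read.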
Yes, for example this search:   search index=* | eval ip="8.8.8.8" | search ip | stats count by index | eval result=if(count>0, "IP found", "IP not found")
Hi @uagraw01, yes, as I said, I experienced this issue in some Splunk installations where there was queue congestion in the Splunk data flow from the Forwarders to the Indexers. In these cases, the _internal logs have lower priority than the other logs, so they arrive late or don't arrive at all. You can check the queues on your forwarders using a simple search:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue","2 - Aggregation Queue", name=="indexqueue", "4 - Indexing Queue", name=="parsingqueue", "1 - Parsing Queue", name=="typingqueue", "3 - Typing Queue", name=="splunktcpin", "0 - TCP In Queue", name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats Median(fill_perc) AS "fill_percentage" perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where fill_percentage>70
| sort -_time

Ciao. Giuseppe
Hi Have you checked / asked whether there are Splunk Workload Management rules implemented for this search? r. Ismo
You could look at that same data at the OS level in the log files under $SPLUNK_HOME/var/log/splunk/ — there are at least splunkd.log, metrics.log, etc. Those contain the same data that you have in _internal. Of course you must have shell-level access on all those source hosts to see it. Just look a couple of pages later, where it says "Using "grep" cli command"; that section and the pages after it show how you can do it on the command line with log files like metrics.log.
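As a rough sketch of that command-line approach (the sample lines below are fabricated for illustration; real lines live in $SPLUNK_HOME/var/log/splunk/metrics.log):

```shell
# Write an illustrative metrics.log excerpt, then filter the queue
# metrics with grep, roughly what the group=queue search does in SPL.
printf '%s\n' \
  '... INFO  Metrics - group=queue, name=parsingqueue, current_size=0' \
  '... INFO  Metrics - group=pipeline, name=indexerpipe, cpu_seconds=0.1' \
  > /tmp/metrics_sample.log
grep 'group=queue' /tmp/metrics_sample.log
```

Only the queue line is printed; on a real host you would run the grep directly against metrics.log.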
I've been running into an issue with a Splunk query that I have been using for a long time, and I'm seeing the following error message: "Please select a shorter time duration for your query," even when I'm using a 5-minute time range. I noticed that this error seems to pop up when we use latest=now() in our queries to get the most recent data. However, when I tried the same query with a specific time range, like earliest=-xxh@h latest=-xxh@h, it worked just fine. Any ideas on why latest=now() might not be fetching results as expected? And is there any resolution for working with latest=now()?
This works when we query directly from Splunk Search:  | stats count as field1 | eval field1="dallvcflwb110u,yes;dallvcflwb120u,yes" | eval field1=split(field1,";") | mvexpand field1 | rex field=field1 "(?<host>.*),(?<mode>.*)" | table host mode | outputlookup atlassian_maintenance.csv   But when we try it via curl, it fails:  curl -k -u admin:Vzadmin@12 https://dallpsplsh01sp.tpd-soe.net:8089/servicesNS/admin/SRE/search/jobs/export -d search="| stats count as field1 | eval field1="dallvcflwb110u,yes;dallvcflwb120u,yes" | eval field1=split(field1,";") | mvexpand field1 | rex field=field1 "(?<host>.*),(?<mode>.*)" | table host mode | outputlookup atlassian_maintenance.csv"   -bash: syntax error near unexpected token `?'
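The error comes from bash, not Splunk: the whole search is wrapped in double quotes, so the inner double quotes around the rex pattern terminate the argument early and bash then tries to parse the bare `(?<host>...)` itself. A minimal demonstration of the fix, with no Splunk involved, just shell quoting (the SPL fragment is taken from the command above):

```shell
# Single quotes around the SPL preserve the inner double quotes,
# so the string reaches curl -d verbatim:
SPL='| rex field=field1 "(?<host>.*),(?<mode>.*)"'
echo "search=$SPL"
# prints: search=| rex field=field1 "(?<host>.*),(?<mode>.*)"
```

So wrapping the entire -d search=... value in single quotes (as in the earlier reply in this thread) avoids both the escaping and the syntax error.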
Hi, Can you please use the mail server parameter with the email ID, as mentioned in the docs below? server="server info" https://docs.splunk.com/Documentation/Splunk/8.1.0/Alert/Emailnotification