All Posts
I encountered the same problem after the ES (7.3.0) installation, and what Giuseppe says about the RAM is correct. To avoid the issue, edit the alert "Audit - ES System Requirements" on the ES SH and adjust the RAM value. Splunk expects 32000 MB of RAM in the check, but your system can report 31750 MB as 32 GB of RAM. Regards, Antonio
Hi Marnell, Yes, this error is happening on my 2 HFs and deployment servers. All have 12 GB of RAM with 10 GB available.
@meshorer When using the format input or format block for JSON, you need to double the { and } and enclose the value in double quotes, such as the below (which I just tested): {{"new_field": "{0}"}} Note that you don't need to double the braces on the {0}, since it's a replacement element, but the actual JSON elements do need escaping in this way, even if you have nested JSON like the below: {{"new_field": {{"sub_field": "{0}"}}}} -- Hope this helps! If so, please mark as a solution for future SOARers. Happy SOARing! --
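For illustration, a minimal sketch of the substitution (the value "blocked" is a hypothetical example, not from the original post):

Template: {{"new_field": "{0}"}}
With {0} = blocked, the formatted result is: {"new_field": "blocked"}

Nested template: {{"new_field": {{"sub_field": "{0}"}}}}
Formatted result: {"new_field": {"sub_field": "blocked"}}

The doubled braces survive formatting as literal { and }, which is why the unescaped template {"new_field": "{0}"} is instead read as a malformed replacement field and fails with the key error seen below.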
Hello all! I am trying to add a field to an artifact with the "update artifact" action (phantom app). I am trying to add a message parameter in the 'value' at the cef_json field, for example: {"new_field": {0}} but unfortunately I get "key_error" and the action fails. How can I solve it?
@ryanaa Line breaking is not possible with the universal forwarder; the indexer or HF is responsible for that. The EVENT_BREAKER setting is the only one that functions on a UF; however, it simply instructs the UF to identify the boundaries of events so that it delivers whole events to indexers. Try to apply these settings on a heavy forwarder or indexer. If this reply helps you, an upvote and "Accept as Solution" is appreciated.
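As an illustration, a minimal props.conf sketch for the indexer or HF (the sourcetype name my:sourcetype is a hypothetical placeholder, not from the original post):

[my:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

With SHOULD_LINEMERGE disabled, each match of the LINE_BREAKER capture group marks an event boundary on the parsing tier.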
Where do we have to run the above query?
Hi @bowesmana , sorry for the confusion. There is only one index="a". Yes, line 4 has no purpose for now. I want to pick a user from the subsearch, let's say user1, which we got at 1:23:20 AM from the subsearch; now I want to look for all the logs in the mentioned main query for user1 up to 1:25:20 AM, and if I find one, I want it to be listed. I also want only the users from the subsearch to be in the list, and no other users that the main query picks up. Please let me know if I was able to explain it.
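A minimal sketch of the user-restriction part of what is described here (the subsearch conditions are a placeholder, and the per-user two-minute window would still need separate handling, e.g. by comparing _time values with stats):

index="a" [ search index="a" <subsearch conditions> | dedup user | fields user ]

This limits the main search to the users returned by the subsearch, so no other users appear in the results.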
After I finished setting up props.conf on the SH, I checked my UF side and restarted it, and the following error appeared: Checking conf files for problems... Invalid key in stanza [web:access] in /opt/splunkforwarder/etc/apps/dynasafe_course_demo_ta/local/props.conf, line 3: ENVENT_BREAKER (value: ([\r\n]+)). Invalid key in stanza [web:secure] in /opt/splunkforwarder/etc/apps/dynasafe_course_demo_ta/local/props.conf, line 6: ENVENT_BREAKER (value: ([\r\n]+)). Your indexes and inputs configurations are not internally consistent. What is going on?
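A likely fix, assuming the intent was the UF event-breaking keys: the key name in the error is misspelled as ENVENT_BREAKER; the recognized spelling in props.conf is EVENT_BREAKER. A corrected sketch for one of the stanzas from the error (the EVENT_BREAKER_ENABLE line is an assumption about the intent, not from the original post):

[web:access]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)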
I'm not sure whether the bug in fieldformat has already been fixed or not. At least in some earlier versions it didn't work correctly in all cases inside a foreach loop. Nope. These are from 9.2.0.1. I begin to suspect that this is by design.
Maybe this helps you in the future: https://conf.splunk.com/files/2019/recordings/FN1315.mp4 https://conf.splunk.com/files/2019/slides/FN1315.pdf "Did we just lose ALL our knowledge objects? Do you know how much time and energy that was?" After a destructive resync, Paychex lost two months of its knowledge object creations/modifications. We learned to be prepared if it were to ever happen again. How? It's easier than you might think, and you don't have to be an admin. You'll learn how to proactively save your work (dashboards, reports, data models, MLTK experiments, ITSI glass tables, macros, views, etc.) and audit changes when they occur. You will leave the session knowing how to manage the ever-increasing number of things you create. You'll also have solutions that can save you the time and effort of having to recreate lost/modified objects, including how to restore service faster. You will also come away with peace of mind, knowing that you can take control of safeguarding and protecting your work, thereby covering your assets when a disaster happens.
@gcusello The last event to come from the _internal index was on February 27, 2024, and the result of your search is pasted below. Could you please help me understand the issue with the queue?
Hi Usually you will get some hints about the product by looking at the directory hierarchy and the names in it. r. Ismo
@gcusello How can I execute your suggested search? It starts with index=_internal, and there is no data coming from that index.
Hi You have those on the indexer(s)/heavy forwarders, and your source is probably JSON? Do you have a KV_MODE=json definition on the SH side for those sourcetypes? r. Ismo
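For reference, a minimal sketch of what that definition could look like in props.conf on the search head (the sourcetype name your:sourcetype is a placeholder):

[your:sourcetype]
KV_MODE = json

This enables search-time JSON field extraction on the SH for that sourcetype.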
Hi one option is to use authorize.conf with the following settings:

srchTimeWin = <integer>
* Maximum time range, in seconds, of a search.
* The Splunk platform applies this search time range limit backwards from the latest time specified for a search.
* If a user has multiple roles with distinct search time range limits, or has roles that inherit from roles with distinct search time range limits, the Splunk platform applies the least restrictive search time range limits to the role.
* For example, if user X has role A (srchTimeWin = 30s), role B (srchTimeWin = 60s), and role C (srchTimeWin = 3600s), user X gets a maximum search time range of 1 hour.
* When set to '-1', the role does not have a search time range limit. This value can be overridden by the maximum search time range value of an inherited role.
* When set to '0' (infinite), the role does not have a search time range limit. This value cannot be overridden by the maximum search time range value of an inherited role.
* This setting does not apply to real-time searches.
* Default: -1

srchTimeEarliest = <integer>
* The earliest event time that can be searched, in seconds before the current wall clock time.
* If a user is a member of a role with a 'srchTimeEarliest' limit, or a role that inherits from other roles with 'srchTimeEarliest' limits, the Splunk platform applies the least restrictive time limit from the roles to the user.
* For example, if a user is a member of role A (srchTimeEarliest = 86400), and inherits role B (srchTimeEarliest = 3600) and role C (srchTimeEarliest = -1 (default)), the user gets an effective earliest time limit of 1 day (86400 seconds) ago.
* When set to '-1', the role does not have an earliest time limit. This value can be overridden by the earliest time value of an inherited role.
* When set to '0' (infinite), the role does not have an earliest time limit. This value cannot be overridden by the earliest time limit value of an inherited role.
* This setting does not apply to real-time searches.
* Default: -1

With these you can define the earliest time and also the search span. Just create a separate role, based on your normal user role, and use it for the users to whom you want to apply these restrictions. r. Ismo
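For example, a minimal authorize.conf sketch (the role name and the limit values are hypothetical placeholders, not from the original post):

[role_restricted_user]
importRoles = user
srchTimeEarliest = 604800
srchTimeWin = 86400

This would let members of the role search at most 7 days back, with any single search spanning at most 24 hours.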
Hi Maries, Please try the search below: <your curl command> -d search= "search index=<indexname> | stats count as field1 | eval field1="dallvcflwb110u,yes;dallvcflwb120u,yes" | eval field1=split(field1,";") | mvexpand field1 | rex field=field1 \"(?<host>.*),(?<mode>.*)\" | table host mode | outputlookup atlassian_maintenance.csv"
Hi Those should be removed quite soon; usually it depends somehow on your environment. Are you sure that you canceled and/or removed those jobs from all apps? For users, in default mode it seems to show only the jobs of the current app when you press the Activity tab. Just select All for the App selection and check that all have been removed. r. Ismo
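If you want to double-check outside the UI, a hedged sketch using the REST endpoint for search jobs (run from the search bar; the field list shown is illustrative):

| rest /services/search/jobs | table sid author dispatchState

This lists the jobs the server still knows about, regardless of which app the UI is filtering on.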
Hi @billy , at first, don't attach a new question to another one, especially when it's closed, because it's more difficult to get an answer; it's always better to open a new question, even with the same topic, to get a surely faster and probably better answer. Anyway, in this way you block the Splunk monitoring, and that isn't a good idea because you're blind on how Splunk is running. Why do you want this? The Splunk logs don't consume license, and you can limit the storage consumption by using a limited (e.g. 7 days) retention on these logs. Anyway, are you sure that you continue to receive these logs from that Forwarder? I say this because, with the configuration you shared, it isn't possible to receive these logs from that Forwarder. Check whether the logs you're receiving have that source (the one in the monitor stanza header) and that host (the Forwarder where you changed the configuration). Ciao. Giuseppe
Hi please don't add your admin user + its password to any posts. And don't actually write those on the command line, as they are stored in history files and/or are visible in the process list! A much better way is to read them into a variable and then use that in queries. You could do it like this:

read USERPASS
admin:<your pass here>
^D

curl -ku $USERPASS .....

Also don't add your real node name into examples! As you are on linux/*nix, you could replace those outer " with ' and then it should work. Then you don't need \" inside your SPL.

curl -ku $USERPASS https://<your splunk SH>:<mgmt port>/servicesNS/admin/SRE/search/jobs/export -d search='| stats count as field1 | eval field1="dallvcflwb110u,yes;dallvcflwb120u,yes" | eval field1=split(field1,";") | mvexpand field1 | rex field=field1 "(?<host>.*),(?<mode>.*)" | table host mode | outputlookup atlassian_maintenance.csv'

In Windows this didn't work ;-( r. Ismo
Yes, for example this search: search index=* | eval ip="8.8.8.8" | search ip | stats count by index | eval result=if(count>0, "IP found", "IP not found")