All Posts



Hi @jeradb, let me understand: you want to filter results from the data model using the lookup, is that correct? In that case: | from datamodel:Remote_Access_Authentication.local | search [| inputlookup Domain | rename name AS company_domain | fields company_domain] | ... Only one point of attention: check whether the field in the data model is named "company_domain" or "Remote_Access_Authentication.company_domain". If it is the second, you have to rename it in the subsearch. What do you want to extract from the data model? Maybe you could use tstats. Ciao. Giuseppe
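A rough sketch of the tstats variant mentioned above (assumptions: the data model field comes back prefixed as Remote_Access_Authentication.company_domain, and the exact dataset reference may need adjusting for your version):

| tstats summariesonly=false count
    from datamodel=Remote_Access_Authentication.local
    by Remote_Access_Authentication.company_domain
| rename Remote_Access_Authentication.company_domain AS company_domain
| search [| inputlookup Domain | rename name AS company_domain | fields company_domain]

This is usually much faster than | from datamodel because tstats reads the indexed/accelerated summaries instead of raw events.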
1. I already searched under the index settings, and none of my index files occupy more than 3 GB of space.  2. I'm also aware that you can't determine what configuration I have, what is installed on it, etc. I'm just asking for some complementary information about optimizing my parameters, maybe reducing tsidx files, because I think I'm forgetting something and I know I may not be aware of the best practice...  Kind regards
I am trying to install the credentials package on a Splunk universal forwarder and need help with a few queries, as below. I am downloading the package from the Splunk Cloud Platform via Apps --> Universal Forwarder --> Download UF credentials. The package gets downloaded, but I am unable to locate the downloaded package on my machine. Please assist me: where can I find the downloaded credentials package?
Hi, would you mind helping with this? I have been working for days to figure out how I can pass a lookup file subsearch as a "like" condition in the main search, something like these two examples: 1)  main search | where like(onerowevent, "%".[search [| inputlookup blabla.csv| <whatever_condition_to_make_onecompare_field>|table onecompare }]."%"]]) 2)  main search | eval onerowevent=if(like(onerowevent, "%".[search [| inputlookup blabla.csv| <whatever_condition_to_make_onecompare_field>|table onecompare }]."%"]]),onerowevent,"")
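One common workaround for this kind of problem — a sketch only, assuming the lookup column is named onecompare and the event field is onerowevent: a subsearch's results are expanded into the outer search as field=value terms, so instead of trying to inline a like() expression, have the subsearch emit wildcarded values:

main search
    [| inputlookup blabla.csv
     | <whatever_condition_to_make_onecompare_field>
     | eval onerowevent="*".onecompare."*"
     | fields onerowevent ]

In a search context, the wildcards in onerowevent="*foo*" give substring matching, which approximates like(onerowevent, "%foo%").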
Hi Splunkers,   I don't need the value in the first line, but I do need that value later in the search to filter, so I tried this way to skip the value: dmz type IN (if($machine$=="DMZ",true,$machine$)) Will that work? Thanks in advance!
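(For what it's worth: IN (...) takes a literal value list, so an if() inside it won't be evaluated. A sketch of the usual pattern, assuming a dashboard token $machine$ and a field named type — when the token is "DMZ" the filter matches everything, otherwise it filters on the token value:)

... | where "$machine$"=="DMZ" OR type="$machine$"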
My current search is:    | from datamodel:Remote_Access_Authentication.local | append [| inputlookup Domain | rename name as company_domain] | dest_nt_domain   How do I get the search to list only items in my table where | search dest_nt_domain=company_domain?  Is there another command other than append that I can use with inputlookup?  I do not need to add the lookup rows to the list; I'm just trying to get the data in to compare against the data model.
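(A sketch of an append-free approach, assuming the lookup's name column holds the domains you want to match against the data model's dest_nt_domain field: let the subsearch generate the filter directly, so the lookup rows never land in your results:)

| from datamodel:Remote_Access_Authentication.local
| search
    [| inputlookup Domain
     | rename name AS dest_nt_domain
     | fields dest_nt_domain ]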
Assuming a single event, try something like this | spath | fields - _raw | transpose 0 column_name=field | eval group=mvindex(split(field,"."),1) | eval year=mvindex(split(field,"."),-2) | where year="2014" OR year="2015" | eval key=mvindex(split(field,"."),-1) | eval {key}='row 1' | fields - "row 1" key field | stats values(*) as * by group year
SEDCMD applies at index time and only to new events.
Hi, it would look as below, for either Grains or Beverages. Let's say for Beverages:

year  type   prod  rate
2014  pepsi  50    60
2015  coke   55    30

A similar tabular representation applies for Grains (in a separate table, of course). Hope my answer is clear; please let me know, otherwise I will try to explain further. Thanks
Hi @anandhalagaras1  You can try this query (note: toint() and a bare | if(...) are not valid SPL; case() handles all three thresholds in one pass — and check the exact field names the rest endpoint returns on your version):

| rest /services/licenser/pools
| eval total_quota_gb = round(usage_quota / 1024 / 1024 / 1024, 2)
| eval used_gb = round(usage_used / 1024 / 1024 / 1024, 2)
| eval usage_percentage = round((used_gb / total_quota_gb) * 100, 2)
| eval alert_level = case(usage_percentage >= 90, "90% and above",
                          usage_percentage >= 80, "80%-89%",
                          usage_percentage >= 70, "70%-79%")
| where isnotnull(alert_level)
| eval alert_message = case(usage_percentage >= 90, "License usage has crossed the critical threshold at " . usage_percentage . "%. Immediate attention required!",
                            usage_percentage >= 80, "License usage has reached " . usage_percentage . "%. Please take immediate action.",
                            true(), "License usage has reached " . usage_percentage . "%. Please take action.")
| table alert_level, alert_message
@olivier_guisneu  Did you reach out to Splunk Support? I am facing a similar issue.
Hi @anandhalagaras1, in the Monitoring Console there's the alert you require; it's named "DMC Alert - Total License Usage Near Daily Quota". You can find it at http://your_splunk_server:8000/en-US/app/splunk_monitoring_console/alerts Ciao. Giuseppe
Hi, I have a JSON object of the following type:

{
  "time": "14040404.550055",
  "Food_24ww": {
    "Grains": {
      "status": "OK",
      "report": {
        "2014": { "type": "rice",   "prod": "50", "rate": "30" },
        "2015": { "type": "pulses", "prod": "50", "rate": "30" }
      }
    },
    "Beverages": {
      "status": "Good",
      "2014": { "type": "pepsi", "prod": "50", "rate": "60" },
      "2015": { "type": "coke",  "prod": "55", "rate": "30" }
    }
  }
}

I want to extract all the key values inside the "report" key for "Grains" and "Beverages". That is, for Grains I want 2014 (and the key values inside it) and 2015 (and the key values inside it), and similarly for Beverages. Now the challenge is that none of the JSON keys down to "report" are constant: the first key "Food_24ww" and the next-level keys "Grains" and "Beverages" are not constant. Thanks
Hello. I am a Splunk newbie. I have a question about the replication factor in search head clustering. Looking at the docs, it says that search artifacts are replicated only for scheduled saved searches. https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/ChooseSHCreplicationfactor   I'm curious about the reason for, and the advantage of, replicating search artifacts only in this case. And then, in the case of a real-time search, is it correct that search artifacts are not replicated and remain only on the local server? In that case, in a clustering environment, member 2 should not be able to see the search results of member 1. But I can view them by using the loadjob command on member 2. So wouldn't it be possible to view real-time search artifacts as well? Thank you
Yes, the _time is not the time of the change; I noticed it too. But overall the code correctly summarizes all changes per id: | table _time id changed The first data point is at 10:20:30, so the reported change at 10:20:56 is correct. I would be very interested in a solution involving "running changes". BTW, I had never heard of the "autoregress" command, thanks!
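(A sketch of one "running changes" approach, assuming the monitored field is named value; streamstats is used here instead of autoregress because its by-clause restarts the comparison for each id:)

| sort 0 id _time
| streamstats current=f window=1 last(value) AS prev_value by id
| eval changed=if(isnotnull(prev_value) AND value!=prev_value, 1, 0)
| where changed=1
| table _time id prev_value value

Each surviving row then carries the _time of the event where the change actually occurred, rather than a summarized timestamp.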
Hello Team, we have deployed the machine agent as a sidecar (a separate container within a pod) for Apache in OSE. It's working for most of the pods, but for one pod we are getting the error below.

code-external-site-ui-sit-50-gm9np==> [system-thread-0] 23 Jan 2024 08:22:14,654 DEBUG RegistrationTask - Encountered error during registration.
com.appdynamics.voltron.rest.client.NonRestException: Method: SimMachinesAgentService#registerMachine(SimMachineMinimalDto) - Result: 401 Unauthorized - content:
  at com.appdynamics.voltron.rest.client.VoltronErrorDecoder.decode(VoltronErrorDecoder.java:62) ~[rest-client-1.1.0.245.jar:?]
  at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:156) ~[feign-core-10.7.4.jar:?]
  at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:80) ~[feign-core-10.7.4.jar:?]
  at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:100) ~[feign-core-10.7.4.jar:?]
  at com.sun.proxy.$Proxy114.registerMachine(Unknown Source) ~[?:?]
  at com.appdynamics.agent.sim.registration.RegistrationTask.run(RegistrationTask.java:147) [machineagent.jar:Machine Agent v23.9.1.3731 GA compatible with 4.4.1.0 Build Date 2023-09-20 05:14:38]
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) [?:?]
  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) [?:?]
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
  at java.lang.Thread.run(Thread.java:834) [?:?]
code-external-site-ui-sit-50-gm9np==> [system-thread-0] 23 Jan 2024 08:22:17,189 DEBUG GlobalTagsConfigsDecider - Global tags enabled: false
code-external-site-ui-sit-50-gm9np==> [system-thread-0] 23 Jan 2024 08:22:17,189 DEBUG RegistrationTask - Running registration task
code-external-site-ui-sit-50-gm9np==> [system-thread-0] 23 Jan 2024 08:22:17,256 WARN RegistrationTask - Encountered error during registration. Will retry in 60 seconds.
code-external-site-ui-sit-50-gm9np==> [system-thread-0] 23 Jan 2024 08:22:17,256 DEBUG RegistrationTask - Encountered error during registration.

We have cross-verified and everything looks good from the configuration end. Kindly help us with your suggestions.
WAF and firewall are typically _not_ solutions associated with email traffic or users' web-related behaviour, so you might want to reconsider your sources list.
Hi Team, we have opted for 250 GB of licensing on a daily basis. So if license usage reaches more than 70% (i.e. 175 GB) I need to get an alert; similarly, if it reaches 80% or more (i.e. 200 GB) I need to get another alert; and finally, if it crosses 90% (i.e. 225 GB) I need to get yet another alert. Can you help me with the search query?
1. For beginner use, it's _probably_ ok to have relatively short retention if you have sources that continuously supply your environment with events. Otherwise you might want to keep your data for longer, in case you want your users to have some material to search. No one can tell you what your case is. If it's an all-in-one server (which I assume it is), the index settings are in the Settings menu under... surprise, surprise, "Indexes". 2. That's completely out of scope of Splunk administration as such, and is a case for your Linux admins. We have no idea what is installed on your server, what its configuration is, or why it is configured this way, so all we can tell you is "you have way too much here for the server's size" and you have to deal with it. And yes, removing random directories is not a very good practice.
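For reference, a minimal sketch of the indexes.conf retention knobs involved (assumptions: an index named main, and example values you would tune to your own case):

[main]
# freeze (by default: delete) buckets whose newest event is older than 30 days
frozenTimePeriodInSecs = 2592000
# cap the index's total size on disk at ~5 GB; oldest buckets are frozen first
maxTotalDataSizeMB = 5000

On an all-in-one server the same settings can be edited from the "Indexes" page in the Settings menu instead of the conf file.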