All Posts

It should be in your browser's downloads folder. Press Ctrl+J to see the files your browser has downloaded. Right-click on a file and click "Show in folder" to see where it is.
Real-time searches are eeeeeevil and generally should not be used at all. Also, there's not much to replicate in the case of a real-time search since... they occur in real time, and if you tried to run one another time you'd be running it against another set of data. But if you meant an ad-hoc search - I think the assumption is that ad-hoc searches are used interactively, so you're probably not going to need the results in another session, called with loadjob by another person logged in to another SHC member. With a scheduled search it's different, because a quite common way to optimize load is to schedule separate searches asynchronously so that one search uses the results of an already-performed search. So it simply makes much more sense in some cases than in others.
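For illustration, a minimal sketch of that scheduled-search pattern (the owner, app and saved search name here are hypothetical): a scheduled search writes its results once, and a second search picks them up with loadjob instead of recomputing them:

| loadjob savedsearch="admin:search:my_scheduled_base_search"
| stats count BY host

loadjob pulls the results of the most recent scheduled run, so the second search costs almost nothing compared to re-running the base search.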
@gcusello Thank you so much for your quick response. The link you sent me is not working; I just get a blank screen. When you get a chance, please send the name of the Add-On so I can search for it in Splunkbase. Thank you again!
The result should contain the values common to both databases, plus the unique/remaining values from Database2. Please help me with the query.

Database1   Database2   Result
A           A           A
B           B           B
C           C           C
D
E           E           E
F           F           F
            G           G
            H           H
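An untested sketch: since the expected Result is every value that appears in Database2 (the common ones plus Database2's unique ones), one way - assuming both datasets live in hypothetical indexes db1 and db2 with the value in a field called val - is:

(index=db1 OR index=db2)
| stats values(index) AS sources BY val
| where isnotnull(mvfind(sources, "db2"))
| table val sources

The stats collects, per value, which indexes it appeared in, and mvfind keeps any row that appeared in db2 (alone or together with db1).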
What do you mean by "the field abcd is already extracted"? Remember that most of the fields you work with in Splunk are so-called "search-time" extractions, which means they are extracted dynamically when you are searching and displaying the data, while SEDCMD works at so-called "index time", which means _before_ the data is written to Splunk's indexes. SEDCMD, as @richgalloway pointed out, does not know anything about the search-time extracted fields, so you can't rely on their values. SEDCMD is a regex-based text substitution which works on the _raw data. There is no concept of a field here whatsoever.
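For reference, a minimal sketch of what a SEDCMD looks like in props.conf (the sourcetype name and the pattern are hypothetical):

# props.conf on the indexer or heavy forwarder doing the parsing
[my_sourcetype]
# sed-style substitution applied to _raw before indexing
SEDCMD-mask_password = s/password=\S+/password=########/g

Note that it matches raw text, not a field named "password" - the regex just happens to match that substring in _raw.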
It's not that easy, because you have several independent data points in the same event. To complicate the case further, you don't even have your "categories" and those data points as entries in an array, but instead as separate members of the object (Beverages and Grains, and members further down the object). So you can't use the typical "spath and mvexpand" approach, because you have nothing to mvexpand and you don't know what to spath by in the first place. With just one such JSON, you can indeed transpose the whole event and treat each field as a separate event, as @ITWhisperer showed. But with more events like this it's going to get complicated. Instead of transposing, you could try to use foreach to "combine" values from separate fields into "composite fields" and then mvexpand and split them into single fields (see the sketch below), but it's very, very ugly and probably not very efficient. Can't you get your data in some decent format in the first place?
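For what it's worth, a rough sketch of that foreach idea, assuming search-time field names like Food_24ww.Grains.report.2014.type from the sample JSON, and assuming every category has the report level (the Beverages block in the sample actually skips it). Only type is carried over here; prod and rate would need the same treatment, which is exactly where it gets ugly:

| spath
| foreach *.report.*.type
    [ eval combined=mvappend(combined, "<<MATCHSEG1>>|<<MATCHSEG2>>|" . '<<FIELD>>') ]
| mvexpand combined
| eval group=mvindex(split(combined,"|"),0)
| eval year=mvindex(split(combined,"|"),1)
| eval type=mvindex(split(combined,"|"),2)

Each foreach iteration packs the category path, the year and the value into one pipe-delimited string, which mvexpand then turns into separate events.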
Hi @jeradb, let me understand: you want to filter results from the datamodel using the lookup, is that correct? In this case:

| from datamodel:Remote_Access_Authentication.local
| search [| inputlookup Domain | rename name AS company_domain | fields company_domain]
| ...

Only one point of attention: check whether the field in the DataModel is named "company_domain" or "Remote_Access_Authentication.company_domain". If the latter, you have to rename it in the subsearch. What do you want to extract from the DataModel? Maybe you could use tstats. Ciao. Giuseppe
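If tstats is an option, a rough sketch (hedged: the dataset and field names below mirror the from command above but may need adjusting to how your datamodel is actually defined):

| tstats count FROM datamodel=Remote_Access_Authentication WHERE nodename=local BY local.company_domain
| search [| inputlookup Domain | rename name AS "local.company_domain" | fields "local.company_domain"]

tstats reads the accelerated summaries instead of raw events, so it's usually much faster than from for this kind of filtering.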
1. I already searched under the index settings, and none of my index files occupy more than 3 GB of space. 2. I'm also aware that you can't determine what configuration I have, what is installed on it, etc. I'm just asking for some additional information about optimizing my parameters, maybe reducing tsidx files, because I think I'm forgetting something and I know I may not be aware of the best practice... Kind regards
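As a side note, one quick way to double-check where the disk space is actually going per index is dbinspect, e.g.:

| dbinspect index=*
| stats sum(sizeOnDiskMB) AS size_on_disk_mb BY index
| sort - size_on_disk_mb

This sums the on-disk size of every bucket (raw data plus tsidx) per index, which is handy for spotting whether tsidx overhead is really the problem.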
I am trying to install the credentials package on a Splunk universal forwarder, and I need help with a few queries, as below. I am downloading the package from the Splunk Cloud Platform via Apps --> Universal forwarder --> download UF credentials. The package downloads, but I am unable to locate it on my local machine. Please assist me: where can I find the downloaded credentials package?
Hi, would you mind helping with this? I have been working for days to figure out how I can pass a lookup file subsearch as a "like" condition in the main search. Two examples of what I'm after (pseudocode, neither of which is valid SPL):

1) main search
   | where like(onerowevent, "%" . [ | inputlookup blabla.csv | <whatever_condition_to_make_onecompare_field> | table onecompare ] . "%")

2) main search
   | eval onerowevent=if(like(onerowevent, "%" . [ | inputlookup blabla.csv | <whatever_condition_to_make_onecompare_field> | table onecompare ] . "%"), onerowevent, "")
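A known-working pattern for this kind of thing (a sketch, assuming the lookup column is onecompare and the event field is onerowevent, as in the examples above): have the subsearch emit wildcarded field=value terms and let format combine them into an OR expression the main search can use:

main search
    [ | inputlookup blabla.csv
      | <whatever_condition_to_make_onecompare_field>
      | eval onerowevent="*" . onecompare . "*"
      | fields onerowevent
      | format ]

The subsearch expands to something like ( ( onerowevent="*A*" ) OR ( onerowevent="*B*" ) ), which behaves like a %...% match for each lookup value.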
Hi Splunkers, I don't need the value in the first line, but I do need that value later in the search to filter, so I tried this way to skip the value "dmz":

type IN (if($machine$=="DMZ",true,$machine$)

Will that work? Thanks in advance!
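That exact if() inside IN() is not valid SPL - IN() expects a list of literal values. A sketch of one way to get the intended behaviour (hypothetical, assuming type is the event field and $machine$ is a dashboard token): move the condition into a where clause so that "DMZ" disables the filter:

| where if("$machine$"=="DMZ", true(), type=="$machine$")

When the token is "DMZ" every event passes; otherwise only events whose type matches the token do.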
My current search is:

| from datamodel:Remote_Access_Authentication.local
| append [| inputlookup Domain | rename name as company_domain]
| dest_nt_domain

How do I get the search to only list items in my table where | search dest_nt_domain=company_domain? Is there another command other than append that I can use with inputlookup? I do not need to add it to the list; I'm just trying to get the data in to compare against the datamodel.
Assuming a single event, try something like this

| spath
| fields - _raw
| transpose 0 column_name=field
| eval group=mvindex(split(field,"."),1)
| eval year=mvindex(split(field,"."),-2)
| where year=2014 OR year=2015
| eval key=mvindex(split(field,"."),-1)
| eval {key}='row 1'
| fields - "row 1" key field
| stats values(*) as * by group year
SEDCMD applies at index time and only to new events.
Hi, it would look as below, for either Grains or Beverages. Let's say for Beverages:

year   type    prod   rate
2014   pepsi   50     60
2015   coke    55     30

A similar tabular representation would apply for Grains (in a separate table, of course). Hope my answer is clear. Please let me know, otherwise I will try to explain further. Thanks
Hi @anandhalagaras1 You can try this query:

| rest /services/licenser/pools
| eval total_quota_gb = floor(usage_quota / 1024 / 1024 / 1024)
| eval used_gb = floor(usage_used / 1024 / 1024 / 1024)
| eval usage_percentage = round((used_gb / total_quota_gb) * 100, 2)
| where usage_percentage >= 70
| eval alert_level = case(usage_percentage >= 90, "90% and above",
                          usage_percentage >= 80, "80%-89%",
                          true(), "70%-79%")
| eval alert_message = case(usage_percentage >= 90, "License usage has crossed the critical threshold at " . usage_percentage . "%. Immediate attention required!",
                            usage_percentage >= 80, "License usage has reached " . usage_percentage . "%. Please take immediate action.",
                            true(), "License usage has reached " . usage_percentage . "%. Please take action.")
| table alert_level, alert_message
@olivier_guisneu Did you reach out to Splunk support? I am facing a similar issue.
Hi @anandhalagaras1, in the Monitoring Console there's the alert you require; it's named "DMC Alert - Total License Usage Near Daily Quota". You can find it at http://your_splunk_server:8000/en-US/app/splunk_monitoring_console/alerts Ciao. Giuseppe
Hi, I have a JSON object of the following type:

{
  "time": "14040404.550055",
  "Food_24ww": {
    "Grains": {
      "status": "OK",
      "report": {
        "2014": {
          "type": "rice",
          "prod": "50",
          "rate": "30"
        },
        "2015": {
          "type": "pulses",
          "prod": "50",
          "rate": "30"
        }
      }
    },
    "Beverages": {
      "status": "Good",
      "2014": {
        "type": "pepsi",
        "prod": "50",
        "rate": "60"
      },
      "2015": {
        "type": "coke",
        "prod": "55",
        "rate": "30"
      }
    }
  }
}

I want to extract all the key values inside the "report" key for "Grains" and "Beverages". That is, for Grains I want 2014 (and the key values inside it) and 2015 (and the key values inside it), and similarly for Beverages. Now the challenge is that none of the JSON keys above "report" are constant: the first key "Food_24ww" and the next-level keys "Grains" and "Beverages" are not constant. Thanks