All Posts

I tried that too, but with that I am getting no results.
Hi @jbv, you can use the now() function; for more info see https://docs.splunk.com/Documentation/SCS/current/SearchReference/DateandTimeFunctions. You can try something like this:
| makeresults
| eval current_time=now()
| table current_time
The result is in epoch time, which you can then convert to whatever format you like. Ciao. Giuseppe
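If you also want it in a human-readable format, a minimal sketch building on the search above is to add an strftime() conversion:
| makeresults
| eval current_time=now()
| eval current_time_readable=strftime(current_time, "%Y-%m-%d %H:%M:%S")
| table current_time current_time_readable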
Hello @jbv, You can get the current time using the now() function. By default it is returned in epoch format only. You can use eval to assign it to a field and you'll get what you want.
| eval current_time_epoch=now()
Thanks, Tejas.
--- If the above solution helps, an upvote is appreciated!
Hi, Try this:
| inputlookup yourlookup // Read data from the lookup file
| search NOT $empty$ trigger_email=true // Filter for records with email trigger enabled
| eval email_subject = "<field_MotherYear> - <field_Customer> - <field_Device> - <field_CheckName> - <field_SelfHealCount> - <field_Status> - <field_Timestamp>" // Construct the subject using all fields
subject = $email_subject // Use the dynamically generated subject
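To make the subject carry actual field values rather than the literal "<field_...>" text, a minimal sketch using eval concatenation could look like the following (the column names are placeholders borrowed from the example above, so adjust them to your real lookup):
| inputlookup yourlookup
| search trigger_email=true
| eval email_subject = MotherYear." - ".Customer." - ".Device." - ".CheckName." - ".SelfHealCount." - ".Status." - ".Timestamp
The resulting email_subject field can then typically be referenced from an email alert action with a $result.email_subject$ token.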
There are a couple of functions that return time: now(), which returns the time the search started, and time(), which returns the time the function is executed. Both of these functions already return the time in epoch format.
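A quick way to see the two side by side (in a fast search the values will usually be identical or nearly so):
| makeresults count=3
| eval search_start=now(), eval_time=time()
| table search_start eval_time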
It is not clear what your actual requirement is - which avg do you want to compare to? The average VALUE for that time period (15m) across all cpus, or the average for that cpu across the whole time period? Assuming the former, a "standard" way of looking for a "fiddle factor" is to determine the standard deviation (for the VALUEs in the time period - 15m), and then determine for each cpu how many stdevs the VALUE is above the mean. You might do this like this:
| eventstats mean(VALUE) as MeanV stdev(VALUE) as StDevV by _time
| eval exceedFactor=if(VALUE > MeanV, (VALUE - MeanV)/StDevV, 0)
| timechart values(exceedFactor) span=15m by cpu limit=0
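As a follow-on sketch, if you only want to surface the outliers, you could keep the CPUs that sit more than, say, 2 standard deviations above the mean for their 15-minute bucket (the threshold of 2 is an arbitrary value to tune):
| bin _time span=15m
| eventstats mean(VALUE) as MeanV stdev(VALUE) as StDevV by _time
| eval exceedFactor=if(VALUE > MeanV, (VALUE - MeanV)/StDevV, 0)
| where exceedFactor > 2
| stats values(cpu) as outlier_cpus max(exceedFactor) as worst_exceed by _time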
no, that's not right - putting the cpu into the by clause of the stats command doesn't give the mean value for the cluster; it performs the stats on the individual cpus.
Hi, Is there a way to get the current time in Splunk and then convert it to epoch? I'm trying to create a dashboard to show inactivity from my data sources and plan to use info from the | metadata command.
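For what it's worth, a rough sketch of the inactivity check built on | metadata could look like this (the index name and the one-hour threshold are placeholders to adapt):
| metadata type=sourcetypes index=your_index
| eval lag_seconds=now()-recentTime
| eval last_seen=strftime(recentTime, "%Y-%m-%d %H:%M:%S")
| where lag_seconds > 3600
| table sourcetype last_seen lag_seconds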
Hi @sarlacc, first of all it isn't a good idea to have the Deployment Server on Indexers or Search Heads - do you have another server? You can use a server shared with other roles only if the DS has to manage up to 50 clients; beyond that it requires a dedicated server. About the data, first check TIME_FORMAT: what is the format of your date, European (dd/mm/yyyy) or American (mm/dd/yyyy)? By default Splunk uses the American format. About the one Universal Forwarder with issues: do you have internal Splunk logs (_* indexes) from it? If yes, it's an issue with that data source; if not, there's a connection issue. Ciao. Giuseppe
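As a quick check for that last point, something like this shows when the forwarder last sent internal logs (the host name is a placeholder):
index=_internal source=*metrics.log* host=your_uf_hostname
| stats latest(_time) AS last_seen BY host
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
If it returns nothing for that host, look at the connection; if it returns recent events, look at the data source itself.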
Hi @abhishekpatel2, try adding the BY clause:
| tstats summariesonly=false count FROM datamodel=Catalyst_App WHERE nodename=Catalyst_Dataset.Security_Advisories_Events BY Catalyst_Dataset.Security_Advisories_Events.Category
| table Catalyst_Dataset.Security_Advisories_Events.Category
Ciao. Giuseppe
Hi @jhuysing, I don't know which data you are monitoring, but anyway, you can add the CPU name to the stats BY clause. Then you can create your own rule to fire an alert, e.g. max value more than 30% above the average, etc., using a where condition. In your case (using 30% more than MeanV):
<your_search>
| bin span=15m _time
| stats max(VALUE) AS MaxV mean(VALUE) AS MeanV range(VALUE) AS Delta BY _time CPU
| where MaxV>MeanV*1.3
If you use _time in the stats command, remember to add the bin command before it. Ciao. Giuseppe
Hi @orendado, you can tag your data using tags and eventtypes (https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/Abouteventtypes) while keeping the sourcetype of each data source, so all the parsing rules stay up and running. I usually define a sourcetype for each type of data, possibly cloning an existing one: e.g. if I have a custom data source in csv format, I clone it from the standard csv sourcetype and call it "my_sourcetype" (or whatever name you like). In this way I keep all the parsing rules of the csv, possibly adding other specific ones, and I can also recognize those logs by sourcetype. Remember that this is useful only for custom data sources; if you have standard data sources (e.g. Fortinet or Cisco or Checkpoint), it's always better to use the sourcetypes in the add-ons from Splunkbase. This is relevant because it isn't sufficient to parse the data, it's also important to normalize it for use in apps such as Enterprise Security. In addition, in these add-ons tags and eventtypes are already defined. Ciao. Giuseppe
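As a small illustration of the search-time benefit (the eventtype and tag names here are made up for the example), once the eventtype and tag are defined you can search across all your custom sources without caring which sourcetype each one uses:
eventtype=my_custom_events tag=custom
| stats count BY sourcetype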
We have a datamodel with a two-level dataset hierarchy (Datamodel -> Parent Dataset -> Child Dataset). We have defined a field in the Child Dataset and we are able to see that field's value in preview.
Datamodel: Catalyst_App
Parent Dataset: Catalyst_Dataset
Child Dataset: Security_Advisories_Events
Field: Category
When we try to run the following tstats query:
| tstats summariesonly=false values(Catalyst_Dataset.Security_Advisories_Events.Category) from datamodel=Catalyst_App where nodename=Catalyst_Dataset.Security_Advisories_Events
we get no results. But at the same time, when we run the following datamodel query:
| datamodel Catalyst_App Security_Advisories_Events search
| fillnull value="-"
| table Catalyst_Dataset.Security_Advisories_Events.Category
we do get Category values.
I can create a query and produce a time chart so I can see the load across the set of cpus:
| timechart values(VALUE) span=15m by cpu limit=0
I can see a trend that one cpu has a higher load. I can also create a query using stats to get the avg/max/range of the load value:
| stats max(VALUE) as MaxV, mean(VALUE) as MeanV, range(VALUE) as Delta by _time
What I want to do is identify any CPU that's running a higher load than the average plus some sort of fiddle factor.
Thank you for your responses @tscroggins @PickleRick 
Looking for ways to refresh the list of clients that phone home to the Deployment Server without restarting the Splunk service or getting access to the server. We have a few sources onboarded that recycle their instances every 24 hours; within a few days the count of clients becomes 4 times our usual number, and unless something is done the DS becomes slower. The only way to reset this list seems to be a Splunk restart, which we want to avoid. Has anyone faced something similar?
Hi @Pratyush, yes, the Azure team has assigned a user to the groups they have created, and we have mapped that group in Splunk.
@inventsekar @deepakc I have attached screenshots below; they show the correct port opened and listening. Please validate. [Screenshots: On Indexer, On UF, On Indexer, On UF]
Use this
| spath input=payload
| rename cacheStats.lds:UiApi.getRecord.* as *
with or without the rename; but unless you rename, remember you need to wrap those fields in single quotes if you want to use them in subsequent eval statements (on the right-hand side).
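For example, without the rename, a field name containing dots and a colon has to be single-quoted on the right-hand side of eval (totalTime here is just a hypothetical key under getRecord, not necessarily one of your actual JSON fields):
| spath input=payload
| eval total_ms='cacheStats.lds:UiApi.getRecord.totalTime'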
I'm running Splunk Enterprise 9.1.1. It is a relatively fresh installation (done this year). Splunk forwarders are also using version 9.1.1 of the agent. The indexer is also the deployment server. Beyond that, I only have forwarders forwarding to it. I have one Linux host (Red Hat 8.9) with this problem. I've deployed Splunk_TA_nix and enabled rlog.sh to show info from /var/log/audit/audit.log.

Using today as an example (06/05/2024), I don't see entries for 06/05/2024, but I do see logs from today under 05/06/2024. Example from the Splunk search page:
index="linux_hosts" host=bad_host (last 30 days)
05/06/2024 at the left side of the events
audit data...........(06/05/2024 14:32:12) audit data.........

As I mentioned above, I have one deployment server, and all forwarders use the same/centralized one. It's a small environment, I'd say ~25 Linux hosts (Red Hat 7 and 8). This is the only Red Hat 8 host with this problem. I tried reinstalling the Splunk forwarder (completely deleted /path/to/splunkforwarder once I uninstalled it).

I know a little about using props.conf with TIME_FORMAT and have not done so. My logic is that if I needed it, I'd see this on all forwarders, not just the one with the problem. I ran localectl and it shows en_US. ausearch -i (the same thing rlog.sh does) shows the dates/times as I'd expect. Anything else I should look for from the OS perspective? Any suggestions on what I could do from Splunk? Also, I noticed that when I go to the _internal index, dates/times are consistent. When I use my normal index (linux_hosts), this is my one RH8 host with the problem; the other Red Hat 8 hosts are what I'd expect.

A side note here: someone else suspected this host wasn't logging, so they did a manual import of the audit.log files. Mind you, the dates in the file were not parsed since they didn't go through rlog.sh (ausearch -i) first. Could this also be part of the problem? If so, how can I undo what was done? Thanks!
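If it helps while troubleshooting, one way to compare the timestamp Splunk parsed against what's in the raw event (using the index and host from the example above) is:
index="linux_hosts" host=bad_host
| eval parsed_time=strftime(_time, "%d/%m/%Y %H:%M:%S")
| table parsed_time _raw
If parsed_time consistently shows the day and month swapped relative to the date inside the raw audit line, that points at timestamp recognition for that source rather than at the forwarder installation itself.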