All Posts

Hi Uma, you just have to create a metric per Token and use a query like this: SELECT toInt(expirationDateTime - eventTimestamp) AS "Seconds". That gives you the difference in seconds between the dates; you can then divide the seconds to get minutes/hours/days if you would rather use those. This gives you a metric telling you how many seconds/minutes/hours/days are left until expiry, and you can then alert on it. Ciao
Hello @Mandar.Kadam, Can you share the solution you got from support? Regards, Amit Singh Bisht
That's the answer, thanks!
To accept the license during the start, execute: /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes. Before you start the forwarder service I suggest creating a user-seed.conf to set the admin username and password (in clear text). user-seed.conf must be stored in /opt/splunkforwarder/etc/system/local/:
[user_info]
USERNAME = admin
PASSWORD = YourPassword
Another method is to hash the password and add the hash to the user-seed.conf. It is described in the following doc: Create secure administrator credentials - Splunk Documentation
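Putting the two steps together, a minimal sketch (assuming the default /opt/splunkforwarder path and an admin user called admin; YourPassword is a placeholder, and the commands should be run as the user that owns /opt/splunkforwarder):

# create the seed file before the first start (placeholder credentials)
cat > /opt/splunkforwarder/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = YourPassword
EOF

# first start: accept the license and suppress interactive prompts
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt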
Thanks for the response, Paul. I removed the master_uri; I can understand why, it is now manager_uri, see below:
/opt/splunk/etc/system/local/server.conf    [license]
/opt/splunk/etc/system/local/server.conf    active_group = Forwarder
/opt/splunk/etc/system/default/server.conf  connection_timeout = 30
/opt/splunk/etc/system/default/server.conf  manager_uri = self
/opt/splunk/etc/system/default/server.conf  receive_timeout = 30
/opt/splunk/etc/system/default/server.conf  report_interval = 1m
/opt/splunk/etc/system/default/server.conf  send_timeout = 30
/opt/splunk/etc/system/default/server.conf  squash_threshold = 2000
/opt/splunk/etc/system/default/server.conf  strict_pool_quota = true
I did something else as well: I removed the heavy forwarders from the distributed search peers. Why they were there I don't know, but it resolved one thing, the warning about disabling the peer.
The only thing remaining is the duplicate license hash (ffffff...) in the _internal index. I can understand the hash itself; every forwarder with this license has this hash. What I don't understand is why this warning appears, and only for the heavy forwarders which were in the distributed search peers, not the ones which were not in that list. It seems something remained somewhere that keeps looking at the license on these heavy forwarders and keeps reporting it is the same license. Any idea?
No matter how many times you ask in different posts, there currently isn't any easy way to do what you are asking for.
Hi @PickleRick, If I replace the TASKID column with the UPDATED column as the rising column, will it make a difference? FYI: I also increased the checkpoint value from 1 to 2, and even after the STATUS changed from RELEASED to FINISHED a second time, that row is not ingested into Splunk.
Hi Team, We are planning to perform a silent installation of the Splunk Universal Forwarder on a Linux client machine. So far, we have created a splunk user on the client machine, downloaded the .tgz forwarder package, and extracted it to the /opt directory. Currently, the folder /opt/splunkforwarder is created, and its contents are accessible. I have navigated to the /opt/splunkforwarder/bin directory, and now I want to execute a single command to: Agree to the license without prompts, and Set the admin username and password. I found a reference for a similar approach in Windows, where the following command is used: msiexec.exe /i splunkforwarder_x64.msi AGREETOLICENSE=yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=Ch@ng3d! /quiet However, I couldn't find a single equivalent command for Linux that accomplishes all these steps together. Could you please provide the exact command to achieve this on Linux?  
Most probably your DB query initially returned one status which got ingested from the input but later something within your DB changed the status. But since the TASKID is the primary identifier for the ingested records, the same TASKID will not be ingested again. Hence the discrepancy between the DB contents and the indexed data.
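If you switch to the UPDATED column as the rising column, the input query would look roughly like this (a sketch; TASKS and the column names are placeholders for your actual schema, and ? is the DB Connect checkpoint placeholder):

SELECT * FROM TASKS
WHERE UPDATED > ?
ORDER BY UPDATED ASC

With a timestamp-like rising column, a row whose STATUS changes later gets a new UPDATED value and is picked up again, at the cost of ingesting the same TASKID more than once.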
Ah, so your problem was actually _not_ the same as the original one. That's why there is rarely a point in digging up old threads.
Thanks, but this colors the background of the cell; I need to color the font only.
Can someone support me on this topic?
Thank you for your reply. I've solved the problem. It was related to a wrong definition of my macro "cim_Network_Resolution_indexes".
And do you have acceleration enabled on this datamodel? The summariesonly=true option tells Splunk to only use accelerated summaries for searching, not the raw events.
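For example, a quick check (a sketch against the CIM Network Resolution datamodel; adjust the dataset and split-by fields to your own search):

| tstats summariesonly=true count from datamodel=Network_Resolution.DNS by DNS.src

With summariesonly=true this returns nothing until the acceleration summaries are built; with summariesonly=false the search also falls back to the raw, non-summarized events.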
No, it will not work. strftime() renders the time to a string. Leave the INGEST_EVAL alone. But seriously, you'd have to get that value using json_extract(), text functions, or an earlier transform extracting an indexed field. Ugly.
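If you really wanted to do it at index time anyway, it would look roughly like this (a sketch, assuming the timestamp sits in a top-level "@timestamp" key and that your Splunk version supports json_extract() in ingest-time eval; the stanza and sourcetype names are placeholders):

# transforms.conf
[set_time_from_json]
INGEST_EVAL = _time=strptime(json_extract(_raw, "@timestamp"), "%Y-%m-%dT%H:%M:%S.%6N%:z")

# props.conf
[your_sourcetype]
TRANSFORMS-set_time = set_time_from_json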
Hi @PickleRick, Just to keep you informed: I am using the below props settings in my Prod env, but I can still see discrepancies.
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
TIME_PREFIX = \"@timestamp\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
You can do
| tstats max(_time) as latest where index=* by index
but depending on the time range you use for the search, it will only return data for those indexes that have data during that timespan. You can also do
| rest /services/data/indexes count=0 | table title maxTime minTime
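If you want the epoch value to be human-readable, you can render it with strftime(), for example (the format string is just an example):

| tstats max(_time) as latest where index=* by index
| eval latest=strftime(latest, "%Y-%m-%d %H:%M:%S")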
Is there any other way outside of using syslog? Either way, thanks for all of the input @PickleRick @gcusello. Just an exercise on my part.
Did you actually read the docs? Especially those for the JDBC module?
1. If you just use the same time format, it makes no sense to move your extraction to INGEST_EVAL.
2. During index time you don't have search-time extracted fields, so you'd have to get the contents of that field manually.
But indeed, as @bowesmana demonstrated (and I confirmed in my lab with %3N as well), you can just use %9N and it will still get only 6 digits and ignore the seventh one.
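So, staying with the props.conf approach, the only change from the settings quoted earlier would be the subsecond specifier (a sketch; the rest of the stanza stays as posted above):

TIME_PREFIX = \"@timestamp\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%:z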