
All Posts

I am trying to determine a host's percentage of time logging to Splunk within a summary index we created. We have an index called "summary_index" and a field called "host_reported" that shows whether a host has been seen in the past hour. Here is the search I am using to see all hosts in the summary index that were seen within the last 24 hours: index=summary_index | stats count by host_reported. What I am trying to do is develop a search that shows what percent of the time over the past 7 days each host has reported to this summary index. For example, if host A only reported to the summary index on 6 of the 7 days, I want it to show its "up time" was 86% for the past 7 days.
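A minimal sketch of one way to approach this, assuming the summary events carry at least one record per host per hour and that host_reported holds the hostname:

index=summary_index earliest=-7d@d latest=@d
| bin _time span=1d
| stats dc(_time) AS days_seen BY host_reported
| eval uptime_pct=round(days_seen/7*100, 0)
| sort - uptime_pct

Bucketing _time by day and taking dc(_time) counts the distinct days each host appeared, so 6 of 7 days comes out as 86%.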
Yes indeed it does solve the issue, but now there's a new issue:

Streamed search execute failed because: Error in 'rex' command: regex="(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>(.)*)" has exceeded the configured depth_limit, consider raising the value in limits.conf.
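If raising the limit is the route taken, a minimal limits.conf sketch (assumption: filesystem access to the search head; on Splunk Cloud this would need a support request; the default depth_limit is 1000):

# limits.conf, e.g. in etc/system/local/
[rex]
depth_limit = 10000

That said, replacing the trailing (?<logmsg>(.)*) with (?<logmsg>.*) removes the per-character capture that tends to drive this backtracking depth in the first place.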
Hello Luiz, in the dashboard you will see the option on the left panel: "Interpolate data gaps". Kindly select that.
Splunk support concluded it was an "as yet undiscovered software bug".
I think I've read this in its entirety 4 times now over the past week. I am having difficulty understanding what the problem is. Let me walk through it and see if writing it down helps...

You work in IST, which is +10.5 hours from CDT. You have an alert whose cron schedule says to fire at 1 PM (13:00) CDT. That's 11:30 PM (23:30) IST. You maybe mistyped "11:00 PM" for that, and maybe that's the issue?

Disregarding the 11:00/11:30 issue, the second thing I think you mentioned is that the alert didn't actually come until 11:44, which is a 14 minute delay. The search itself is lightweight; it should run practically instantly, so run time shouldn't be an issue. The most obvious reason for the 14 minute delay is that your server is too busy at 1 PM CDT to get this out any faster. You should check into that - there are a lot of resources available inside Splunk to see what might be going on, but my guess is just that it's a busy time of the day, coupled with possibly too many "heavy" searches that trigger then. You could also increase the priority of that search, though this doesn't address the core problem and may actually make things *worse* and not better. I mean, maybe better for this one search, and being so fast that's probably OK, but still, it's just trying to hide the bigger problem.

Anyway, hope that helps and happy Splunking!

-Rich
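As a starting point for that digging, a hedged sketch against the scheduler's internal logs ("my_alert" is a placeholder for the actual saved search name; assumes access to the _internal index):

index=_internal sourcetype=scheduler savedsearch_name="my_alert"
| eval dispatch_delay=dispatch_time-scheduled_time
| table _time savedsearch_name status run_time dispatch_delay
| sort - _time

A consistently large dispatch_delay around 1 PM CDT would point at scheduler contention rather than the search itself.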
You don't need the = after rex:

| rex "(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>.*)"

Updated to remove brackets in the logmsg pattern.
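For reference, the full pipeline from the question with that fix applied would look like:

index=my_app_index AND source="**/my-app-service.log" AND sourcetype="app_v1"
| rex "(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>.*)"
| table mydatetime, logger, thread, logmsg
| rename logmsg AS MESSAGE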
Hi Team, our Splunk Search Heads are hosted in the cloud and managed by Support, and we are currently running the latest version (9.1.2308.203).

This pertains to the Max Lines setting in the Format section of the Search and Reporting app. Previously, Splunk defaulted to displaying 20 or more lines in search results. As an administrator responsible for extracting Splunk logs across various applications over the years, I never needed to expand brief search results to read all lines. In recent weeks, however, possibly following an upgrade of the Search Heads, I've observed that each time I open a new Splunk search window, or an existing tab times out and auto-refreshes, the Format > Max Lines option is reset to 5. Consequently, I find myself changing it after nearly every search, which has become cumbersome.

Could you kindly guide me on how to change the default value from 5 to 20 in the Search and Reporting app on both Search Heads? This adjustment would help our customers and end users who find it cumbersome to modify it for each search.
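For what it's worth, on instances where you control the configuration files, the events-list default can be set via ui-prefs.conf; a sketch (assumption: display.events.maxLines is the setting backing that Format option, and since these Search Heads are support-managed the change would have to go through a Splunk Cloud support ticket):

# ui-prefs.conf, e.g. etc/apps/search/local/ui-prefs.conf
[search]
display.events.maxLines = 20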
I have an issue with ES not showing all of the views depending on which user is logged in. Is there a location for permissions of the views? For example, if I am logged in as a Splunk admin I can see all of the views, but as an ESS admin I see fewer. Most important is that the Incident Review view is not there. When I go to Configure / All Configurations / General / Navigation as the ESS admin, all of the views are shown for me to move around and configure, yet the ribbon remains the same. Where should I look for what is different?
@gcusello  Indeed, I have applied the correct sourcetype there to ensure that events are appropriately divided. Nonetheless, the masking of passwords is not taking place as intended.
How do you know that xa goes with x1 and xc goes with x2? Like, literally, I'm looking at your example and I don't know how you would have come up with that correlation. Can you tell me how you'd do this manually? Unless we know this, I have no idea how we'd know how to make it work programmatically.
Requesting help with a search query. I have application logs in Splunk like:

2024-04-02T12:26:02.244-04:00,severity=DEBUG,thread=main,logger=org.apache.catalina.core.NamingContextListener,{},Creating JNDI naming context

2024-04-02T12:26:02.118-04:00,severity=DEBUG,thread=main,logger=org.apache.catalina.core.NamingContextListener,{}, Adding resource ref UserDatabase ResourceRef[className=org.apache.catalina.UserDatabase,factoryClassLocation=null,factoryClassName=org.apache.naming.factory.ResourceFactory,{type=description,content=User database that can be updated and saved},{type=scope,content=Shareable},{type=auth,content=Container},{type=singleton,content=true},{type=factory,content=org.apache.catalina.users.MemoryUserDatabaseFactory},{type=pathname,content=conf/tomcat-users.xml}]

And I'm using the following query to separate the different sections of the message:

index=my_app_index AND source="**/my-app-service.log" AND sourcetype="app_v1"
|rex="(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>(.)*)"
|table mydatetime,logger,thread,_raw,logmsg
|rename logmsg AS MESSAGE

What I see is that the columns mydatetime and logmsg (MESSAGE) are empty. What I expect is that mydatetime contains the initial date-time and logmsg (MESSAGE) contains the last message part:

mydatetime=2024-04-02T12:26:02.244-04:00, logger=org.apache.catalina.core.NamingContextListener, thread=main, logmsg=Creating JNDI naming context

mydatetime=2024-04-02T12:26:02.118-04:00, logger=org.apache.catalina.core.NamingContextListener, thread=main, logmsg=Adding resource ref UserDatabase ResourceRef[className=org.apache.catalina.UserDatabase,factoryClassLocation=null,factoryClassName=org.apache.naming.factory.ResourceFactory,{type=description,content=User database that can be updated and saved},{type=scope,content=Shareable},{type=auth,content=Container},{type=singleton,content=true},{type=factory,content=org.apache.catalina.users.MemoryUserDatabaseFactory},{type=pathname,content=conf/tomcat-users.xml}]
3.6 is old and you should update; then the current documentation would work. But it also had a simpler system of checkpoints, storing those in files on disk. I cannot remember exactly where those were, but maybe something like $SPLUNK_HOME/var/lib/splunk/modinputs/dbx_input...? I don't know; I have that directory and it seems familiar, but it's empty on my system because I try to keep up to date. (Hehehe, you knew I was going to say that, didn't you?)
Normally, I've found using the command line (CLI) to set up being a deployment client is easier. It's a single command on your Splunk Universal Forwarder. https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Configuredeploymentclients

Receiving is, as you noticed, on the receiving page. As long as that's configured (obligatory Splunk Docs link: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Enableareceiver) then it'll accept data from other systems.

Now keep in mind those two systems are separate, use different ports, and are managed differently/separately; in fact you can have either one without the other (though some combinations are a bit silly or pointless). And there are three concepts.

Receiving is set up on an indexer (technically, on any full Splunk instance), and enabling it lets that server accept data (usually over port 9997) from any other Splunk instance, UF or HF (that's "Universal Forwarder" and "Heavy Forwarder", in case you didn't know).

Forwarding is the other side of receiving - forwarding is set up on the machines you want to have send their data to that receiving system. Enabling forwarding on a UF tells it to send all the data that it reads into that receiving system.

Deployment client/server is a *management* system. So you set up receiving and forwarding, which is the plumbing of how the data moves around. Then the deployment server (which is, iirc, enabled by default on all full Splunk instances, so it does not need to be separately enabled) listens on port 8089 by default. Once you have the forwarding and receiving in place, you tell your UF to be a deployment client of the deployment server via the docs I supplied above. After you've done that, when you go on the server to add data you'll have new options in the 'add data' section to have it get the data from the UF. Here's another docs link for that: https://docs.splunk.com/Documentation/Splunk/latest/Data/Forwarddata

Hope that helps! I know it's got some new concepts and ideas and it's not quite like many other systems. But it all generally does make sense once you get over the hump.

-Rich
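To make that concrete, a minimal CLI sketch (hostnames are examples, substitute your own; run from $SPLUNK_HOME/bin, and the commands may prompt for credentials):

# On the indexer: enable receiving on port 9997
splunk enable listen 9997

# On the Universal Forwarder: send data to the indexer...
splunk add forward-server idx01.example.com:9997

# ...and register as a deployment client of the deployment server
splunk set deploy-poll ds01.example.com:8089
splunk restart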
Setting supportOctetCountedFraming="off" on the input fixes newlines being encoded to #012.  
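For context, a sketch of where that parameter sits in rsyslog (assuming an imtcp input on port 514; adjust module and port to your setup):

# rsyslog.conf
module(load="imtcp")
input(type="imtcp" port="514" supportOctetCountedFraming="off")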
Thank you for your help, but I figured out what was wrong. The query I am using pulls data from a relational database. We have created a macro that converts the SPL into the query language for that DB, and part of that includes the time frame; so essentially, for those queries the time picker is irrelevant. As I stated previously, the query works fine in the Splunk search bar. However, in the dashboard the query is the parent query, and the results are filtered by subsequent child queries. The child queries have no concept of earliest or latest, even though they are supposed to inherit that from the parent query, and the SPL doesn't allow you to put that code in after the filter query. Our problem was that since the parent query didn't use the earliest and latest times, we had deleted the code that defined them after the query. Once I reinserted the earliest and latest times after the parent query, the Single Value with trendline panels worked as they should.
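For anyone hitting the same thing, a rough Simple XML sketch of the shape that worked (the macro, token, and filter names here are placeholders, not the actual ones):

<search id="parent">
  <query>`my_db_query_macro`</query>
  <earliest>$time_tok.earliest$</earliest>
  <latest>$time_tok.latest$</latest>
</search>

<chart>
  <search base="parent">
    <query>| search status="error"</query>
  </search>
</chart>

The point being that the <earliest>/<latest> elements stay attached to the parent (base) search, and the post-process child searches inherit the time range from there.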
@kiran_panchavat Hi, so which TA should I use for Genesys Cloud? The Pulse TA asks for Pulse info, whereas Genesys Cloud uses credentials, which I don't see as an option in the configuration.
The documentation link given belongs to DB Connect version 3.15.0 and talks about "Splunk DB Connect version 3.10.0 or higher". Our DB Connect version is 3.6.0. When I change the DB Connect version on the doc link to 3.6.0, I receive this: "The topic you've asked to see does not apply to the version you selected." So it looks like the documentation says nothing about backup/restore of DB Connect 3.6.0. Or am I missing something?

Best regards,
Altin
Hi @pavithra, I’m a Community Moderator in the Splunk Community. This question was posted 5 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
When writing plain text such as "Mitre ATT&CK" in the Next Steps field of a notable event, it is then shown, when the notable is created, as "Mitre ATT&amp;CK", which is clearly incorrect. Is it possible to escape the & character in some way?

This also happens when using action:url - [[action|url:Mitre ATT&CK ]] is shown as Mitre ATT&amp;CK

Any help would be appreciated.
I don't know what the original rsyslog configuration is (or even where that rsyslog sits :-)). But your option will only control whether the tab character is escaped. The general option for escaping control characters is parser.escapeControlCharactersOnReceive.
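A sketch of that option in RainerScript form (assumption: a reasonably recent rsyslog 8.x; legacy configs use the $EscapeControlCharactersOnReceive directive instead):

# rsyslog.conf: stop escaping control characters on receive
# (by default e.g. tab becomes #011 and newline becomes #012)
global(parser.escapeControlCharactersOnReceive="off")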