All Posts


This worked. I was able to develop a data model that included the following as a constraint:
NOT (TERM(proc1) OR TERM(proc2) OR ... OR TERM(procn))
Thanks, Tom
And this rex doesn't produce any error
I re-checked by putting in the rex you provided, once again without the equals (=) symbol, but surprisingly the error message still comes back with the words 'regex='.
This regex works with the sample events and is much more efficient according to regex101.com.

| rex "(?<mydatetime>[^,]+),severity=(?<severity>[^,]+),thread=(?<thread>[^,]+),logger=(?<logger>[^,]+),\{\},(?<logmsg>.*)"
Again, what's with the = after the regex? Is this just a typo?
Assuming that your summary index has a single event for each host for each day that it has reported, you should be able to divide your count (from the stats command you shared) by 7 and multiply by 100 to get the percentage "uptime".
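A minimal sketch of that calculation, assuming one summary event per host per day; the index and field names come from the question, and the earliest/latest modifiers pinning the range to the last 7 whole days are an addition to verify:

index=summary_index earliest=-7d@d latest=@d
| stats count by host_reported
| eval uptime_pct=round(count/7*100)

A host that reported on 6 of the 7 days would show uptime_pct=86.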
Hi team, I am following the instructions below to bring Genesys Cloud logs into Splunk: https://splunkbase.splunk.com/app/6552 Under the details and installation instructions of the app, I can't find the configuration, and it also did not prompt me for the input configuration.
I am trying to determine a host's percentage of time it logs to Splunk within a summary index we created. We have an index called "summary_index" and a field called "host_reported" that shows if a host has been seen in the past hour.

Here is the search I am using to see all hosts in the summary index that were seen within the last 24 hrs:

index=summary_index | stats count by host_reported

What I am trying to do is develop a search that shows me what percent of the time over the past 7 days each host has reported to this summary index. So for example, if host A only reported to the summary index on 6 of the 7 days, I want it to show its "uptime" was 86% for the past 7 days.
Yes indeed it does solve the issue, but now there's a new one:

Streamed search execute failed because: Error in 'rex' command: regex="(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>(.)*)" has exceeded the configured depth_limit, consider raising the value in limits.conf.
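If you did want to raise the limit rather than simplify the regex, a sketch of the relevant limits.conf stanza might look like the following; the value 10000 is an arbitrary example, not a recommendation, and on Splunk Cloud this file is managed by Support:

# limits.conf (value is an example only)
[rex]
depth_limit = 10000

That said, the more efficient [^,]+-based regex suggested elsewhere in this thread avoids the backtracking that trips the limit in the first place.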
Hello Luiz, in the dashboard you should see the option in the left panel: "Interpolate data gaps". Kindly select that.
Splunk support concluded it was an "as yet undiscovered software bug".
I think I've read this in its entirety 4 times now over the past week. I am having difficulty understanding what the problem is. Let me walk through it and see if writing it down helps...

You work in IST, which is +10.5 hours from CDT. You have an alert whose cron schedule says to fire at 1 PM (13:00) CDT. That's 11:30 PM (23:30) IST. You maybe mistyped "11:00 PM" for that, and maybe that's the issue?

Disregarding the 11:00/11:30 issue, the second thing I think you mentioned is that the alert didn't actually come until 11:44, which is a 14 minute delay. The search itself is lightweight; it should run practically instantly, and run-time shouldn't be an issue. The most obvious reason for the 14 minute delay is that your server is too busy at 1 PM CDT to get this out any faster. You should check into that - there are a lot of resources available inside Splunk to see what might be going on, but my guess is just that it's a busy time of the day, coupled with possibly too many "heavy" searches that trigger then. You could also increase the priority of that search, though this doesn't address the core problem and may actually make things *worse* and not better. I mean, maybe better for this one search, and being so fast that's probably OK, but still, it's just trying to hide the bigger problem.

Anyway, hope that helps and happy Splunking!

-Rich
You don't need the = after the rex:

| rex "(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>.*)"

Updated to remove brackets in the logmsg pattern.
Hi Team, our Splunk Search Heads are hosted in Cloud and managed by Support, and currently we are running the latest version (9.1.2308.203).

This pertains to the Max Lines setting in the Format section of the Search and Reporting App. Previously, Splunk defaulted to displaying 20 or more lines in search results within the Search and Reporting App. As an administrator responsible for extracting Splunk logs across various applications over the years, I never encountered the need to expand brief search results to read all lines. However, in recent weeks, possibly following an upgrade of the Search Heads, I've observed that each time I open a new Splunk search window, or the existing Splunk tab times out and auto-refreshes, the Format > Max Lines option is reset to 5. Consequently, I find myself changing it after nearly every search, which has become cumbersome.

Therefore, kindly guide me on how to change the default value from 5 to 20 in the Search and Reporting App on both Search Heads? This adjustment would alleviate the challenge faced by most of our customers and end-users who find it cumbersome to modify it for each search.
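For what it's worth, a sketch of a setting that may govern this default, assuming it is honored on this version; the stanza and attribute name should be verified against the ui-prefs.conf spec for your release, and on Splunk Cloud changes to .conf files generally go through Support or a private app:

# ui-prefs.conf (attribute name is an assumption to verify)
[search]
display.events.maxLines = 20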
I have an issue with ES not showing all of the views depending on which user is logged in. Is there a location for permissions of the views? For example, if I am logged in as a Splunk admin I can see all of the views: [screenshot] As an ESS admin I see: [screenshot] Most important is that the Incident Review view is not there. When I go to Configure > All Configurations > General > Navigation as the ESS admin, all of the views are shown for me to move around and configure. The ribbon remains the same. Where should I look for what is different?
@gcusello  Indeed, I have applied the correct sourcetype there to ensure that events are appropriately divided. Nonetheless, the masking of passwords is not taking place as intended.
How do you know that xa goes with x1 and xc goes with x2? Like, literally, I'm looking at your example and I don't know how you would have come up with that correlation. Can you tell me how you'd do this manually? Unless we know this, I have no idea how we'd make it work programmatically.
Requesting help with a search query. I have application logs in Splunk like:

2024-04-02T12:26:02.244-04:00,severity=DEBUG,thread=main,logger=org.apache.catalina.core.NamingContextListener,{},Creating JNDI naming context
2024-04-02T12:26:02.118-04:00,severity=DEBUG,thread=main,logger=org.apache.catalina.core.NamingContextListener,{}, Adding resource ref UserDatabase ResourceRef[className=org.apache.catalina.UserDatabase,factoryClassLocation=null,factoryClassName=org.apache.naming.factory.ResourceFactory,{type=description,content=User database that can be updated and saved},{type=scope,content=Shareable},{type=auth,content=Container},{type=singleton,content=true},{type=factory,content=org.apache.catalina.users.MemoryUserDatabaseFactory},{type=pathname,content=conf/tomcat-users.xml}]

And I'm using the following query to separate the different sections of the message:

index=my_app_index AND source="**/my-app-service.log" AND sourcetype="app_v1"|rex="(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>(.)*)"|table mydatetime,logger,thread,_raw,logmsg|rename logmsg AS MESSAGE

What I see is that the columns mydatetime and logmsg (MESSAGE) are empty. What I expect is that mydatetime contains the initial date-time, and logmsg (MESSAGE) contains the last message part, like:

mydatetime: 2024-04-02T12:26:02.244-04:00 | logger: org.apache.catalina.core.NamingContextListener | thread: main | logmsg: Creating JNDI naming context
mydatetime: 2024-04-02T12:26:02.118-04:00 | logger: org.apache.catalina.core.NamingContextListener | thread: main | logmsg: Adding resource ref UserDatabase ResourceRef[className=org.apache.catalina.UserDatabase,factoryClassLocation=null,factoryClassName=org.apache.naming.factory.ResourceFactory,{type=description,content=User database that can be updated and saved},{type=scope,content=Shareable},{type=auth,content=Container},{type=singleton,content=true},{type=factory,content=org.apache.catalina.users.MemoryUserDatabaseFactory},{type=pathname,content=conf/tomcat-users.xml}]
3.6 is old and you should update; then the current documentation would work. But it also had a simpler system of checkpoints, storing those in files on disk. I cannot remember exactly where those were, but maybe something like $SPLUNK_HOME/var/lib/splunk/modinputs/dbx_input...? I don't know, I have that directory and it seems familiar, but it's empty on my system because I try to keep up to date. (Hehehe, you knew I was going to say that, didn't you?)
Normally, I've found using the command line (CLI) to set up being a deployment client is easier. It's a single command on your Splunk Universal Forwarder (see the sketch after this post): https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Configuredeploymentclients

Receiving is, as you noticed, on the receiving page. As long as that's configured (obligatory Splunk Docs link: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Enableareceiver), it'll accept data from other systems.

Now keep in mind those two systems are separate, use different ports, and are managed differently/separately; in fact you can have either one without the other (though some combinations are a bit silly or pointless). And there are three concepts.

Receiving is set up on an indexer (technically, on any full Splunk instance), and enabling it enables that server to accept data (usually over port 9997) from any other Splunk instance, UF or HF (that's "Universal Forwarder" and "Heavy Forwarder", in case you didn't know).

Forwarding is the other side of Receiving - Forwarding is set up on the machines you want to have send their data to that receiving system. Enabling forwarding on a UF tells it to send all the data that it reads/whatever into that receiving system.

Deployment client/server is a *management* system. So you set up receiving and forwarding, which is the plumbing of how the data moves around. Then the deployment server (which is, iirc, enabled by default on all full Splunk instances, so it does not need to be separately enabled) listens on port 8089 by default. Once you have the forwarding and receiving in place, you tell your UF to be a deployment client of the deployment server via the docs I supplied above. After you've done that, when you go on the server to add data you'll have new options in the 'add data' section to have it get the data from the UF. Here's another docs link for that: https://docs.splunk.com/Documentation/Splunk/latest/Data/Forwarddata

Hope that helps! I know it's got some new concepts and ideas and it's not quite like many other systems. But it all generally does make sense once you get over the hump.

-Rich
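A minimal sketch of that single CLI command, where deploy.example.com:8089 is a placeholder for your own deployment server's address and management port:

# Run on the Universal Forwarder; the host:port is a placeholder
$SPLUNK_HOME/bin/splunk set deploy-poll deploy.example.com:8089
$SPLUNK_HOME/bin/splunk restart

This writes the target URI into deploymentclient.conf, after which the forwarder phones home to the deployment server on its management port.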