All Posts


Hi Team,

Our Splunk search heads are hosted in Splunk Cloud, managed by Support, and we are currently running the latest version (9.1.2308.203).

This pertains to the Max Lines setting in the Format section of the Search and Reporting app. Previously, Splunk defaulted to displaying 20 or more lines in search results. As an administrator responsible for extracting Splunk logs across various applications over the years, I never needed to expand brief search results to read all lines. However, in recent weeks, possibly following an upgrade of the search heads, I've observed that each time I open a new search window, or an existing tab times out and auto-refreshes, the Format > Max Lines option is reset to 5. Consequently, I find myself changing it after nearly every search, which has become cumbersome.

Could you kindly guide me on how to change the default value from 5 to 20 in the Search and Reporting app on both search heads? This adjustment would alleviate the challenge faced by our customers and end users, who currently find it cumbersome to modify the setting for each search.
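For reference, this default lives in ui-prefs.conf as display.events.maxLines (the shipped default is 5). A minimal sketch of an app-level override, assuming you can deploy configuration to the search heads (on Splunk Cloud this typically means a support ticket or an uploaded app):

# $SPLUNK_HOME/etc/apps/search/local/ui-prefs.conf
# Sets the default Max Lines shown per event in the Search view
[search]
display.events.maxLines = 20

A restart or a debug/refresh may be needed before the new default takes effect; per-user overrides in etc/users/ can still shadow it.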
I have an issue with ES not showing all of the views, depending on which user is logged in. Is there a location for permissions on the views? For example, if I am logged in as a Splunk admin I can see all of the views. As an ESS admin I see fewer, and most importantly the Incident Review view is not there. When I go to Configure > All Configurations > General > Navigation as the ESS admin, all of the views are shown for me to move around and configure, yet the ribbon remains the same. Where should I look for what is different?
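For reference, a quick way to inspect effective view permissions is the REST endpoint for UI views; a minimal sketch (the app name below is an assumption, adjust to your ES app):

| rest /services/data/ui/views splunk_server=local
| search eai:acl.app="SplunkEnterpriseSecuritySuite"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read eai:acl.perms.write

Comparing eai:acl.perms.read against the roles of the affected user should show whether the view itself is restricted, as opposed to just the navigation menu.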
@gcusello  Indeed, I have applied the correct sourcetype there to ensure that events are appropriately divided. Nonetheless, the masking of passwords is not taking place as intended.
How do you know that xa goes with x1 and xc goes with x2? Literally, I'm looking at your example and I don't know how you would have come up with that correlation. Can you tell me how you'd do this manually? Unless we know this, I have no idea how we'd make it work programmatically.
Requesting help with a search query. I have application logs in Splunk like:

2024-04-02T12:26:02.244-04:00,severity=DEBUG,thread=main,logger=org.apache.catalina.core.NamingContextListener,{},Creating JNDI naming context
2024-04-02T12:26:02.118-04:00,severity=DEBUG,thread=main,logger=org.apache.catalina.core.NamingContextListener,{}, Adding resource ref UserDatabase ResourceRef[className=org.apache.catalina.UserDatabase,factoryClassLocation=null,factoryClassName=org.apache.naming.factory.ResourceFactory,{type=description,content=User database that can be updated and saved},{type=scope,content=Shareable},{type=auth,content=Container},{type=singleton,content=true},{type=factory,content=org.apache.catalina.users.MemoryUserDatabaseFactory},{type=pathname,content=conf/tomcat-users.xml}]

And I'm using the following query to separate the different sections of the message:

index=my_app_index AND source="**/my-app-service.log" AND sourcetype="app_v1"
|rex="(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>(.)*)"
|table mydatetime,logger,thread,_raw,logmsg
|rename logmsg AS MESSAGE

What I see is that the columns mydatetime and logmsg (MESSAGE) are empty. What I expect is that mydatetime contains the initial date-time and logmsg (MESSAGE) contains the last message part:

mydatetime | logger | thread | logmsg
2024-04-02T12:26:02.244-04:00 | org.apache.catalina.core.NamingContextListener | main | Creating JNDI naming context
2024-04-02T12:26:02.118-04:00 | org.apache.catalina.core.NamingContextListener | main | Adding resource ref UserDatabase ResourceRef[className=org.apache.catalina.UserDatabase,factoryClassLocation=null,factoryClassName=org.apache.naming.factory.ResourceFactory,{type=description,content=User database that can be updated and saved},{type=scope,content=Shareable},{type=auth,content=Container},{type=singleton,content=true},{type=factory,content=org.apache.catalina.users.MemoryUserDatabaseFactory},{type=pathname,content=conf/tomcat-users.xml}]
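For reference, the rex syntax above isn't valid SPL (rex takes a quoted regex, not rex="..."), so severity, thread, and logger are probably being populated by Splunk's automatic key=value extraction rather than by the rex, while mydatetime and logmsg have no key=value form and stay empty. A minimal working sketch against the sample events, assuming comma-delimited fields up to the {} marker:

| rex "^(?<mydatetime>[^,]+),severity=(?<severity>[^,]+),thread=(?<thread>[^,]+),logger=(?<logger>[^,]+),\{\},\s*(?<logmsg>.*)"

Using [^,]+ instead of \S* also keeps each capture from running past its comma delimiter.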
3.6 is old and you should update; then the current documentation would work. But it also had a simpler system of checkpoints, storing those in files on disk. I cannot remember exactly where those were, but maybe something like $SPLUNK_HOME/var/lib/splunk/modinputs/dbx_input...? I don't know; I have that directory and it seems familiar, but it's empty on my system because I try to keep up to date. (Hehehe, you knew I was going to say that, didn't you...?)
Normally, I've found using the command line (CLI) to set up being a deployment client is easier. It's a single command on your Splunk Universal Forwarder: https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Configuredeploymentclients

Receiving is, as you noticed, on the receiving page. As long as that's configured (obligatory Splunk Docs link: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Enableareceiver), it'll accept data from other systems.

Now keep in mind those two systems are separate, use different ports, and are managed differently/separately; in fact, you can have either one without the other (though some combinations are a bit silly or pointless). And there are three concepts; the CLI sketch after this list shows the commands involved.

Receiving is set up on an indexer (technically, on any full Splunk instance), and enabling it allows that server to accept data (usually over port 9997) from any other Splunk instance, UF or HF (that's "Universal Forwarder" and "Heavy Forwarder", in case you didn't know).

Forwarding is the other side of receiving: it is set up on the machines you want to send their data to that receiving system. Enabling forwarding on a UF tells it to send all the data it reads into that receiving system.

Deployment client/server is a *management* system. So you set up receiving and forwarding, which are the plumbing of how the data moves around. Then the deployment server (which is, iirc, enabled by default on all full Splunk instances, so it does not need to be separately enabled) listens on port 8089 by default. Once you have the forwarding and receiving in place, you tell your UF to be a deployment client of the deployment server via the docs I supplied above. After you've done that, when you go on the server to add data you'll have new options in the 'add data' section to have it get the data from the UF. Here's another docs link for that: https://docs.splunk.com/Documentation/Splunk/latest/Data/Forwarddata

Hope that helps! I know it's got some new concepts and ideas and it's not quite like many other systems, but it all generally does make sense once you get over the hump.

-Rich
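For reference, a minimal sketch of those CLI commands, assuming default ports (hostnames are placeholders):

# On the indexer: enable receiving on port 9997
$SPLUNK_HOME/bin/splunk enable listen 9997

# On the Universal Forwarder: point forwarding at the indexer...
$SPLUNK_HOME/bin/splunk add forward-server indexer.example.com:9997
# ...and make it a deployment client of the deployment server
$SPLUNK_HOME/bin/splunk set deploy-poll deployserver.example.com:8089
$SPLUNK_HOME/bin/splunk restart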
Setting supportOctetCountedFraming="off" on the input fixes newlines being encoded to #012.  
Thank you for your help, but I figured out what was wrong. The query I am using pulls data from a relational database. We have created a macro that converts the SPL into the query language for that DB, and part of that includes the time frame; so essentially, for those queries the time picker is irrelevant. As I stated previously, the query works fine in the Splunk search bar. However, in the dashboard, the query is the parent query, and the results are filtered by subsequent child queries. The child queries have no concept of earliest or latest, even though they are supposed to inherit that from the parent query, and the SPL doesn't allow you to put that code in after the filter query. Our problem was that since the parent query didn't use the earliest and latest times, we had deleted the code that defined them after the query. Once I reinserted the earliest and latest times after the parent query, the Single Value with trend line panels worked as they should.
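For reference, a minimal sketch of what that looks like in Simple XML, assuming a base search feeding post-process child searches (macro, token, and search names are placeholders):

<search id="base_search">
  <query>`my_db_macro` | fields ...</query>
  <earliest>$time_tok.earliest$</earliest>
  <latest>$time_tok.latest$</latest>
</search>
...
<single>
  <search base="base_search">
    <query>| search status="failed" | stats count</query>
  </search>
</single>

The <earliest>/<latest> elements must sit on the base search; post-process children inherit its time range and cannot set their own.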
@kiran_panchavat Hi, so, which TA should I use for Genesys Cloud? The Pulse TA asks for Pulse info, whereas Genesys Cloud uses credentials, which I don't see as an option in the configuration.
The documentation link given belongs to DB Connect version 3.15.0 and is written for "Splunk DB Connect version 3.10.0 or higher". Our DB Connect version is 3.6.0. When, on that doc page, I change the DB Connect version to 3.6.0, I receive this: "The topic you've asked to see does not apply to the version you selected." So it looks like the documentation says nothing about backup/restore of DB Connect 3.6.0. Or am I missing something?

Best regards,
Altin
Hi @pavithra,

I'm a Community Moderator in the Splunk Community. This question was posted 5 years ago, so it might not get the attention it needs to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.

Thank you!
When writing plain text such as "Mitre ATT&CK" in the Next Steps field of a notable event, it is then shown, when the notable is created, as "Mitre ATT&amp;CK", which is clearly incorrect. Is it possible to escape the & character in some way?

This also happens when using action:url too: [[action|url:Mitre ATT&CK ]] is shown as Mitre ATT&amp;CK.

Any help would be appreciated.
I don't know what the original rsyslog configuration is (or even where that rsyslog is :-)). But your option will only affect (un)escaping of the tab character. The general option for escaping control characters is parser.escapeControlCharactersOnReceive.
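For reference, a minimal sketch of turning that off globally in RainerScript syntax (where this lands in your particular config is an assumption; test before rolling out):

# Disable escaping of control characters (e.g. newline -> #012) on received messages
global(parser.escapeControlCharactersOnReceive="off")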
@ITWhisperer Events are coming in XML format, which is why I am using xpath. The above queries are running slow. Could you please advise me on how I can optimize them?
curl -k -u svc_aas -d search="search index=aas sourcetype=syslog" https://splunk-prod-api.internal.xxxx.com/services/search/jobs

I want to run this using Postman. Can someone help me frame the Postman requests to search and retrieve Splunk logs?
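For reference, a minimal sketch of the equivalent Postman setup, assuming basic auth with the svc_aas account (the job-polling step is simplified):

1. POST https://splunk-prod-api.internal.xxxx.com/services/search/jobs
   Authorization: Basic Auth (svc_aas / password)
   Body (x-www-form-urlencoded): search = search index=aas sourcetype=syslog, output_mode = json
   The response contains a search job ID (sid).

2. GET https://splunk-prod-api.internal.xxxx.com/services/search/jobs/<sid>/results?output_mode=json
   Same auth; returns the results once the job is finished (poll /services/search/jobs/<sid> until dispatchState is DONE).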
@ITWhisperer thanks for your reply. You have definitely put me on the right path. My original data has more ID and attribute fields, so I needed to find a way to generalize your solution further.

I had a bit of a struggle to understand why you chose these particular three eventstats lines with their respective values(...) and by ... clauses. I believe a more generic recipe would be like this: if you have n ID fields and k attribute fields, do one eventstats ... by ID for each of the n ID fields:

| eventstats values(ID_2) as ID_2, .., values(ID_n) as ID_n, values(attr_1) as attr_1, .., values(attr_k) as attr_k by ID_1
| eventstats values(ID_1) as ID_1, values(ID_3) as ID_3, .., values(ID_n) as ID_n, values(attr_1) as attr_1, .., values(attr_k) as attr_k by ID_2
| eventstats .. by ID_3
...
| eventstats .. by ID_n-1
| eventstats values(ID_1) as ID_1, .., values(ID_n-1) as ID_n-1, values(attr_1) as attr_1, .., values(attr_k) as attr_k by ID_n

The subsequent mvexpand and streamstats count ... where count==1 can be simplified as:

| fillnull value="N/A" ```or stats by fieldlist ignores rows with at least one null value in fieldlist```
| stats count by ID_1, ..., ID_n, attr_1, ..., attr_k
| fields - count
| foreach * [ | eval <<FIELD>>=if(<<FIELD>>=="N/A",null(),<<FIELD>>) ] ```undo the N/A replacement```

This stanza even expands possible multivalues in all of the n attributes; otherwise each attribute field with potential multivalues would need its own explicit mvexpand <attr>. It also avoids mvexpand's potential memory issue.

For my own sample data, it would read:

| makeresults
| eval _raw="ID_A;ID_B;X1;X2
A1;B1;X1_1;X2_1
A2;B2;X1_2A;X2_2
A2;B2;X1_2B;X2_2
A3;B3;X1_3;X2_3
"
| multikv forceheader=1
| table ID_A, ID_B, X1, X2
| append [ | makeresults
| eval _raw="ID_A;ID_B;Y1;Y2
A2;B2;Y1_2;
A2;B2;;Y2_2
A3;B3;;Y2_3A
A3;B3;;Y2_3B
A4;B4;Y1_4;Y2_4
"
| multikv forceheader=1
| table ID_A, ID_B, Y1, Y2 ]
| append [ | makeresults
| eval _raw="ID_B;ID_C;Z1
B1;C1;Z1_1
B3;C3;Z1_3
B5;C5;Z1_5
"
| multikv forceheader=1
| table ID_B, ID_C, Z1 ]
| table ID_A, ID_B, ID_C, X1, X2, Y1, Y2, Z1
```--- relevant code starts here ---```
| eventstats values(ID_B) as ID_B, values(ID_C) as ID_C, values(X1) as X1, values(X2) as X2, values(Y1) as Y1, values(Y2) as Y2, values(Z1) as Z1 by ID_A
| eventstats values(ID_A) as ID_A, values(ID_C) as ID_C, values(X1) as X1, values(X2) as X2, values(Y1) as Y1, values(Y2) as Y2, values(Z1) as Z1 by ID_B
| eventstats values(ID_A) as ID_A, values(ID_B) as ID_B, values(X1) as X1, values(X2) as X2, values(Y1) as Y1, values(Y2) as Y2, values(Z1) as Z1 by ID_C
| fillnull value="N/A" ```or stats by fieldlist ignores rows with at least one null value in fieldlist```
| stats count by ID_A, ID_B, ID_C, X1, X2, Y1, Y2, Z1
| fields - count
| foreach * [ | eval <<FIELD>>=if(<<FIELD>>=="N/A",null(),<<FIELD>>) ] ```undo the N/A replacement```

Please let me know if I have overlooked something.
I'm by no means an rsyslog guru but ran into it recently. There may be a better way to solve this, but the quick fix was to turn off both supportOctetCountedFraming (input) and escapeControlCharacterTab (global).

$EscapeControlCharacterTab off

[...other config...]

input(type="imtcp" port="<port>" name="<name>" ruleset="<ruleset>"
      supportOctetCountedFraming="off")
Hi @anandhalagaras1,

what's the sourcetype to apply the masking to? I suppose that the sourcetype in the props.conf stanza header is only an example, and that in your installation you have the correct sourcetype to apply the transformation.

Ciao.
Giuseppe
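For reference, a minimal sketch of index-time password masking via props.conf/transforms.conf, assuming a hypothetical sourcetype and a password=... pattern (adjust the regex to your real events); this belongs on the first full Splunk instance that parses the data:

# props.conf
[my_sourcetype]
TRANSFORMS-mask_password = mask_password

# transforms.conf
[mask_password]
# Capture the "password=" prefix, replace the value itself
REGEX = (?i)(password\s*=\s*)\S+
FORMAT = $1########
DEST_KEY = _raw

SEDCMD in props.conf is an alternative for simple substitutions; either way, masking only applies to events indexed after the change.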
May I know the exact answer, please?