All Posts

Hi @abroun, to help you I also need the second search. In short, you have to correlate the results from both searches using stats BY a common key. Ciao. Giuseppe
Hey, I have a problem preparing a Splunk query. Could you assist me? I have a simple query that returns a table with a few fields:

some-search | fields id, time | table id, time

I also have a macro with two arguments (id and time) that returns a table with status and type fields. I want to modify the first query to run a subquery for each row by calling my macro and appending the returned fields to the final table. In the end, I want a table with four fields: id, time, status, and type (where status and type were obtained by calling the subquery with id and time). Is this possible?
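A minimal sketch of one way to do this with the map command, which runs a search once per input row (the macro name my_macro(2) is a placeholder from the question, and this assumes the macro expands to a valid generating search; map substitutes $id$ and $time$ from each row):

some-search
| fields id, time
| table id, time
| map maxsearches=100 search="| `my_macro($id$,$time$)` | eval id=\"$id$\", time=\"$time$\" | table id, time, status, type"

Note that map is slow and capped by maxsearches; if the correlation can be expressed as a join key, the stats-by-common-key approach suggested in the reply above usually scales better.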
Hello,

I'm using Splunk Enterprise 9.1.2 on my local Linux machine (in a Docker container). When documenting a new custom command in the searchbnf.conf file in my Splunk app, one of the fields is called "description". However, I can't find where this field is actually used. The command help that pops up when I start typing the command in the search assistant in Splunk Web only shows the "shortdesc" field. This is also true for commands from apps installed from Splunkbase. I'm attaching a screenshot that shows what I see when I start typing a command. In this case I'm using the "summary" command from the Splunk ML app. The entry in searchbnf.conf is:

[summary-command]
syntax = summary <model_name>
shortdesc = Returns a summary of a machine learning model that was learned using the "fit" command.
description = The summary is algorithm specific. For <LinearRegression>, \
    the summary is a list of coefficients. For \
    <LogisticRegression>, the summary is a list of \
    coefficients for each class.
comment1 = Inspect a previously-learned LinearRegression model, "errors_over_time":
example1 = | summary errors_over_time
usage = public
related = fit apply listmodels deletemodel

As you can see, only "shortdesc" shows in the little help window that pops up. Is this normal? Is there something to change in some configuration to make the full description show? Is there a different place where I can find the full description for a command? FYI, the "Learn more" link goes to http://localhost:8000/en-US/help?location=search_app.assist.summary which gets automatically redirected to https://docs.splunk.com/Documentation/MLApp/5.4.1/User/Customsearchcommands?ref=hk#summary. However, for my custom command it gets redirected to https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/3rdpartycustomcommands?ref=hk, so that is not relevant... Thanks!
Any further input after answering your questions?
Hello Splunk Community, I'm seeking assistance with ingesting data from Tenable to my Splunk AIO instance. Every time I attempt to configure the TA for Tenable, I encounter HTTPS errors. I suspect there may be an error in the steps I'm following, and I would greatly appreciate someone guiding me through the process step by step. Additionally, I'm looking for guidance on how to ensure that the dashboards are running properly on the Tenable app within Splunk.
Hi guys, I am using the timeline visualization in my Splunk dashboard to show total elapsed time, but sometimes it doesn't look good. Are there any examples of showing total elapsed time with a different visualization? Is there a better way to graph total elapsed time in a Splunk dashboard?

In my Splunk query:

| eventstats min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
| eval StartTime=round(strptime(Logon_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Logoff_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")

In my dashboard:

| sort -Timestamp
| eval ElapsedTimeInSecs=ElapsedTimeInSecs*1000
| table Timestamp correlationId Status ElapsedTimeInSecs
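One hedged alternative, reusing the field names from the query above: tostring(<seconds>, "duration") renders elapsed seconds as HH:MM:SS (and with a day component past 24 hours), avoiding the pitfall that strftime treats its argument as an epoch timestamp rather than a duration:

| eval "Total Elapsed Time"=tostring(ElapsedTimeInSecs, "duration")

For a graph, a bar chart of stats max(ElapsedTimeInSecs) by correlationId with seconds on the y-axis is often easier to read than a timeline.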
Hi @RanjithaN99, an indexer cluster is managed by the Cluster Manager, so you have to create the new indexes.conf on that server, which then deploys it to the indexers; for more info see https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Aboutclusters. In short, you have to create a stanza in indexes.conf with the new index using the CLI and push the configuration using the GUI. For data forwarding from the SH to the indexers, you should already have this configuration, because it's a best practice to send internal logs from all Splunk servers to the indexers. If not, go to [Settings > Forwarding and receiving > Forwarding] and configure forwarding. Ciao. Giuseppe
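As a rough sketch (the index name, hostnames, and port below are placeholders), the stanza pushed from the Cluster Manager's manager-apps directory (master-apps on older versions) could look like:

indexes.conf (on the Cluster Manager, pushed to the peers):
[my_new_index]
homePath = $SPLUNK_DB/my_new_index/db
coldPath = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
repFactor = auto

outputs.conf (on the search head, if forwarding is not configured yet):
[tcpout]
defaultGroup = primary_indexers
[tcpout:primary_indexers]
server = xxidx01:9997, xxidx02:9997

repFactor = auto is what makes the new index replicate across the cluster.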
4/2/24 5:57:10.000 AM   02-APR-2024 05:57:10 * (CONNECT_DATA=(SID=cpdb11)(CID=(PROGRAM=perl)(HOST=a5071ue1plora04)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.18.76.29)(PORT=53100)) * establish * cpdb11 * 0
4/2/24 5:57:10.000 AM   2024-04-02T05:57:10.270270-04:00
4/2/24 5:57:09.000 AM   02-APR-2024 05:57:09 * service_update * cpdb11 * 0
4/2/24 5:57:09.000 AM   02-APR-2024 05:57:09 * service_update * cpdb11 * 0
4/2/24 5:57:08.000 AM   TNS-12505: TNS:listener does not currently know of SID given in connect descriptor
4/2/24 5:57:08.000 AM   02-APR-2024 05:57:08 * (CONNECT_DATA=(SID=pdb09)(CID=(PROGRAM=perl)(HOST=a5071ue1plora04)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.18.76.29)(PORT=53098)) * establish * pdb09 * 12505
4/2/24 5:57:08.000 AM   TNS-12505: TNS:listener does not currently know of SID given in connect descriptor
4/2/24 5:57:08.000 AM   02-APR-2024 05:57:08 * (CONNECT_DATA=(SID=pdb09)(CID=(PROGRAM=perl)(HOST=a5071ue1plora04)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.18.76.29)(PORT=53096)) * establish * pdb09 * 12505
4/2/24 5:57:08.000 AM   2024-04-02T05:57:08.619205-04:00
index="mulesoft" applicationName="api" |spath content.payload{} |mvexpand content.payload{}| transaction correlationId | rename "content.payload{}.AP Import flow processing results{}.requestID" as... See more...
index="mulesoft" applicationName="api" |spath content.payload{} |mvexpand content.payload{}| transaction correlationId | rename "content.payload{}.AP Import flow processing results{}.requestID" as RequestID "content.payload{}.GL Import flow processing results{}.impConReqId" as ImpConReqId content.payload{} as response |eval OracleRequestId="RequestID: ".if(isnull(RequestID),0,RequestID)." ImpConReqId: ".if(isnull(ImpConReqId),0,ImpConReqId)|table OracleRequestId response
Please share your sample events in a code block (</>) rather than as an image. Also, what settings do you currently have? I am assuming you are looking to do this at ingest time rather than search time; please clarify.
Hi, I am working in a distributed Splunk environment with one search head and an indexer cluster. I am trying to monitor a path on the search head, and I created a monitor input from the web GUI. How do I create an index on the indexer cluster and configure forwarding from the search head to the indexer cluster? Please help me. Thanks
What is the full search?
index="mulesoft" applicationName="api" |spath content.payload{} |mvexpand content.payload{}| transaction correlationId | rename "content.payload{}.AP Import flow processing results{}.requestID" as... See more...
index="mulesoft" applicationName="api" |spath content.payload{} |mvexpand content.payload{}| transaction correlationId | rename "content.payload{}.AP Import flow processing results{}.requestID" as RequestID "content.payload{}.GL Import flow processing results{}.impConReqId" as ImpConReqId content.payload{} as response |eval OracleRequestId="RequestID: ".if(isnull(RequestID),0,RequestID)." ImpConReqId: ".if(isnull(ImpConReqId),0,ImpConReqId)|table OracleRequestId response
What is your current search?
What about if you set the initial value as well as the default value?
Hi @ITWhisperer, yes, it's working; I used isnull before the field values and it works. But in the scenario below it's not showing any values. Three of the four entries have null values in impConReqId, so nothing shows in the table.

AP Import flow related results : Extract has no AP records to Import into Oracle

{
  "GL Import flow processing results" : [
    { "concurBatchId" : "463", "batchId" : "6393", "count" : "1000", "impConReqId" : null, "errorMessage" : null, "filename" : "81711505038.csv" },
    { "concurBatchId" : "463", "batchId" : "6393", "count" : "1000", "impConReqId" : null, "errorMessage" : null, "filename" : "11505038.csv" },
    { "concurBatchId" : "463", "batchId" : "6393", "count" : "1000", "impConReqId" : null, "errorMessage" : null, "filename" : "CONCUR_GLJE_37681711505038.csv" },
    { "concurBatchId" : "463", "batchId" : "6393", "count" : "768", "impConReqId" : "101539554", "errorMessage" : null, "filename" : "711505038.csv" }
  ]
}
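A hedged variation of the search from this thread: fillnull replaces the missing values with 0 before the string concatenation, so rows with null impConReqId still produce output (field names and paths are copied from the original search):

index="mulesoft" applicationName="api"
| spath content.payload{}
| mvexpand content.payload{}
| transaction correlationId
| rename "content.payload{}.AP Import flow processing results{}.requestID" as RequestID "content.payload{}.GL Import flow processing results{}.impConReqId" as ImpConReqId content.payload{} as response
| fillnull value=0 RequestID ImpConReqId
| eval OracleRequestId="RequestID: ".RequestID." ImpConReqId: ".ImpConReqId
| table OracleRequestId response

One caveat: spath may not emit array elements whose JSON value is literally null, so those entries can disappear before fillnull runs; comparing mvcount of the extracted field against the array length is a way to confirm whether that is what's happening.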
Hi @kiran_panchavat @richgalloway, thank you for your response!

Full error message:

[Indexer-x] The search process with search_id="remote_ip-<xxx>" may have returned partial results. Try running your search again. If you see this error repeatedly, review search.log for details or contact your Splunk administrator.

We implemented SmartStore for a high-volume index a few weeks ago and have since been experiencing issues with the search and replication factors not being met for SmartStore-enabled indexes. We raised a support case with Splunk about this, and they informed us that it is a known bug with no fix version available. We hadn't seen this issue before last week, but it has been showing up in most searches against the SmartStore index for the past 3-4 days. Furthermore, we are experiencing search performance issues on the SmartStore index: either it takes a long time to fetch results, or the search gets stuck if we run the query over more than 24 hours.

Index configuration:

[smartstore_index]
frozenTimePeriodInSecs = 15552000
repFactor = auto
maxDataSize = 1000
maxHotBuckets = 30
hotlist_recency_secs = 86400
maxGlobalRawDataSizeMB = 62914560
homePath = /data/smartstore_index/db
coldPath = /data/smartstore_index/colddb
thawedPath = /data/smartstore_index/thaweddb
remotePath = volume:remote_store/smartstore_index

Approx. daily ingestion on the index: 2 TB per day
Local SSD volume size: 16 TB
Remote location: S3 bucket

We're not sure whether we're receiving this error because of the search and replication factor issue. Please help us fix it.
I am inputting my regex in PCRE format and it's still not working. I am trying to exclude all static resources with one regex (duplicated extensions removed and "tff" corrected to "ttf"):

(?i).*\.(jpe?g|png|gif|pdf|txt|js|html|ttf|css|svg|dll|ya?ml|ico|env|gz|bak)$
Hello everyone, we installed Splunk Enterprise on individual servers for each Splunk component at a temporary site and then moved it to its permanent location. At the temporary site we named the indexers and the cluster master with a "-t" suffix (i.e., xxlsplunkcm1-t, xxlsplunkidx01-t, and xxlsplunkidx02-t) to indicate that they were temporary. Since the indexers are physical servers, we are building new physical servers and naming them xxlsplunkidx01 and xxlsplunkidx02 respectively. We want to rename the cluster master to xxlsplunkcm1. Can you help me with the validation steps and pre-checks needed to keep the Splunk environment healthy?
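As a sketch of the rename itself (server names are taken from the question; validate on a test instance first and back up $SPLUNK_HOME/etc): the cluster manager's own name lives in server.conf, and every peer and search head points at it via manager_uri (master_uri on pre-9.x versions):

server.conf on the cluster manager:
[general]
serverName = xxlsplunkcm1

server.conf on each indexer and search head:
[clustering]
manager_uri = https://xxlsplunkcm1:8089

Typical pre-checks before the change: confirm the cluster is in a complete state (search factor and replication factor met), enable maintenance mode while hostnames change, and verify that DNS resolves the new names from every Splunk server.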