All Posts

4/2/24 5:57:10.000 AM   02-APR-2024 05:57:10 * (CONNECT_DATA=(SID=cpdb11)(CID=(PROGRAM=perl)(HOST=a5071ue1plora04)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.18.76.29)(PORT=53100)) * establish * cpdb11 * 0
4/2/24 5:57:10.000 AM   2024-04-02T05:57:10.270270-04:00
4/2/24 5:57:09.000 AM   02-APR-2024 05:57:09 * service_update * cpdb11 * 0
4/2/24 5:57:09.000 AM   02-APR-2024 05:57:09 * service_update * cpdb11 * 0
4/2/24 5:57:08.000 AM   TNS-12505: TNS:listener does not currently know of SID given in connect descriptor
4/2/24 5:57:08.000 AM   02-APR-2024 05:57:08 * (CONNECT_DATA=(SID=pdb09)(CID=(PROGRAM=perl)(HOST=a5071ue1plora04)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.18.76.29)(PORT=53098)) * establish * pdb09 * 12505
4/2/24 5:57:08.000 AM   TNS-12505: TNS:listener does not currently know of SID given in connect descriptor
4/2/24 5:57:08.000 AM   02-APR-2024 05:57:08 * (CONNECT_DATA=(SID=pdb09)(CID=(PROGRAM=perl)(HOST=a5071ue1plora04)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.18.76.29)(PORT=53096)) * establish * pdb09 * 12505
4/2/24 5:57:08.000 AM   2024-04-02T05:57:08.619205-04:00
index="mulesoft" applicationName="api"
| spath content.payload{}
| mvexpand content.payload{}
| transaction correlationId
| rename "content.payload{}.AP Import flow processing results{}.requestID" as RequestID "content.payload{}.GL Import flow processing results{}.impConReqId" as ImpConReqId content.payload{} as response
| eval OracleRequestId="RequestID: ".if(isnull(RequestID),0,RequestID)." ImpConReqId: ".if(isnull(ImpConReqId),0,ImpConReqId)
| table OracleRequestId response
Please share your sample events in a code block (</>) rather than as an image. Also, what settings do you currently have? I am assuming you are looking to do this at ingest time rather than at search time; please clarify.
Hi, I am working in a distributed Splunk environment with one search head and an indexer cluster. I am trying to monitor a path that is on the search head, and I created a monitor input from the web GUI. How do I create an index on the indexer cluster and configure forwarding from the search head to the indexer cluster? Please help me. Thanks
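As a sketch of the usual pattern (index name, hostnames, and ports below are placeholders, not values from this thread): the index is defined once on the cluster manager and pushed to the peers, and the search head forwards its local data via outputs.conf:

```ini
# indexes.conf — place under $SPLUNK_HOME/etc/manager-apps/<app>/local
# on the cluster manager (master-apps on older versions), then push
# the bundle to the peers
[my_new_index]
homePath   = $SPLUNK_DB/my_new_index/db
coldPath   = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
repFactor  = auto

# outputs.conf — on the search head, to forward its locally
# monitored data to the indexer cluster peers
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx01.example.com:9997, idx02.example.com:9997
```

With forwarding configured this way, the monitor input on the search head sends its data to the peers, where the index defined above receives it.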
What is the full search?
index="mulesoft" applicationName="api"
| spath content.payload{}
| mvexpand content.payload{}
| transaction correlationId
| rename "content.payload{}.AP Import flow processing results{}.requestID" as RequestID "content.payload{}.GL Import flow processing results{}.impConReqId" as ImpConReqId content.payload{} as response
| eval OracleRequestId="RequestID: ".if(isnull(RequestID),0,RequestID)." ImpConReqId: ".if(isnull(ImpConReqId),0,ImpConReqId)
| table OracleRequestId response
What is your current search?
What about if you set the initial value as well as the default value?
Hi @ITWhisperer  Yes, it's working; I used isnull before the field values and it's working. But in the below scenario it's not showing any values. Out of three there are two null values in impConReqId, so it's not showing any values in the table.

AP Import flow related results : Extract has no AP records to Import into Oracle

{
  "GL Import flow processing results" : [
    { "concurBatchId" : "463", "batchId" : "6393", "count" : "1000", "impConReqId" : null, "errorMessage" : null, "filename" : "81711505038.csv" },
    { "concurBatchId" : "463", "batchId" : "6393", "count" : "1000", "impConReqId" : null, "errorMessage" : null, "filename" : "11505038.csv" },
    { "concurBatchId" : "463", "batchId" : "6393", "count" : "1000", "impConReqId" : null, "errorMessage" : null, "filename" : "CONCUR_GLJE_37681711505038.csv" },
    { "concurBatchId" : "463", "batchId" : "6393", "count" : "768", "impConReqId" : "101539554", "errorMessage" : null, "filename" : "711505038.csv" }
  ]
}
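Worth noting: for a JSON null (as in impConReqId above), spath typically extracts no value at all, so a default has to be substituted before the field is used. A sketch, with field paths assumed from the event above:

```spl
| spath path="GL Import flow processing results{}" output=results
| mvexpand results
| spath input=results
| eval impConReqId=coalesce(impConReqId, "0"), errorMessage=coalesce(errorMessage, "none")
| table filename count impConReqId errorMessage
```

This produces one row per array element, with "0"/"none" standing in for the null values so every row appears in the table.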
Hi @kiran_panchavat @richgalloway thank you for your response!

Full error message:
Error message : [Indexer-x] The search process with search_id="remote_ip-<xxx>" may have returned partial results. Try running your search again. If you see this error repeatedly, review search.log for details or contact your Splunk administrator.

We implemented SmartStore for a high-volume index a few weeks ago and have since been experiencing issues with search and replication factors not being met for SmartStore-enabled indexes. We raised a support case with Splunk about this and they informed us that it is a known bug with no fix version available. We hadn't seen this issue before last week, but it's been showing up in most searches for the past 3-4 days (with the SmartStore index). Furthermore, we are experiencing search performance issues on the SmartStore index; either it takes a long time to fetch results, or the search gets stuck if we run the query over more than 24 hours.

Index configuration:
[smartstore_index]
frozenTimePeriodInSecs = 15552000
repFactor = auto
maxDataSize = 1000
maxHotBuckets = 30
hotlist_recency_secs = 86400
maxGlobalRawDataSizeMB = 62914560
homePath = /data/smartstore_index/db
coldPath = /data/smartstore_index/colddb
thawedPath = /data/smartstore_index/thaweddb
remotePath = volume:remote_store/smartstore_index

Approx daily ingestion on index: 2TB per day.
Local SSD volume size: 16TB
Remote location: S3 bucket

We're not sure if we're receiving this error because of the search & replication factor issue. Request help to fix.
I am inputting my regex in PCRE format and it's still not working. I am trying to exclude all static resources in one regex:

(?i).*\.(jpeg|jpg|png|gif|jpeg|pdf|txt|js|html|tff|css|svg|png|pdf|dll|yml|yaml|ico|env|gz|bak)$
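For what it's worth, a deduplicated version of that alternation (jpeg, png, and pdf each appear twice, and tff may be a typo for ttf), applied at search time with the regex command; the field name uri is an assumption, not taken from the post:

```spl
| regex uri!="(?i)\.(jpe?g|png|gif|pdf|txt|js|html|ttf|css|svg|dll|ya?ml|ico|env|gz|bak)$"
```

Since the pattern is anchored with $ and only the end of the value matters, the leading .* from the original is unnecessary here.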
Hello Everyone, We have installed Splunk Enterprise on individual servers, one for each Splunk component, at a temporary site, and moved it to the permanent location. At the temporary site we named the indexers and cluster master with a "-t" suffix (i.e., xxlsplunkcm1-t & xxlsplunkidx01-t, 02-t) indicating they are temporary. Since the indexers are physical servers, we are building new physical servers and naming them xxlsplunkidx01 & 02 respectively. We want to rename the cluster master to xxlsplunkcm1. Can you help me with the validation steps and pre-checks to be made in order to keep a healthy Splunk environment?
1. Your problem is not clearly specified. You might want to find out how many users are logged in at some given point in time, or which ones are logged in (also possibly counting, or not, duplicate logins).
2. Do you have separate login and logout events?
3. Remember that as you're logging only login and logout events, you won't find sessions which "overlap" your search time range. For example, if your user logged in at 9am and logged out at 12pm, you won't find this session if you only search through 10am-11am, because you have no events regarding this session during that time range. (This problem can be alleviated for specific use cases by using summary indexing.)
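For illustration, one common sketch for the "how many users are logged in at a given time" variant, assuming the events carry an action field with login/logout values (the sourcetype and field names are assumptions):

```spl
sourcetype=auth (action="login" OR action="logout")
| sort 0 _time
| eval delta=if(action="login", 1, -1)
| streamstats sum(delta) as concurrent_users
| timechart span=5m max(concurrent_users)
```

The caveat in point 3 still applies: sessions opened before the search window start at an uncounted +1, so the running total undercounts them.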
The below 2 commands are not working:

| `histogram("event.Properties.duration", 31)`

bin "$var$" bins=$bins$
| stats count by "$var$"
| makecontinuous "$var$"
| fillnull count

What type of graph or visualization would you like to create? I just want to create a dashboard tile to show the metric.
Hello, I have sample data with exactly 10 characters: a combination of alphabetic characters (2-4), followed by spaces (2-4) and numbers (2-4). Refer to the table for sample field values; an underscore represents a space.

SI.No   ID
1       ABCD__1234
2       AB____1234
3       ABCD___123
4       ABCDE__123

In the dashboard, I've got a filter for ID. The requirement is that the user can enter a single space or two spaces between the ABCD and 1234 in the filter box. By passing this token value, irrespective of the number of spaces, we need to fetch the results. Thank you.
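One sketch for making the comparison space-insensitive is to collapse runs of whitespace on both the field and the token before comparing; the token name id_token is an assumption, not from the post:

```spl
| eval id_norm = replace(ID, "\s+", " ")
| eval tok_norm = replace("$id_token$", "\s+", " ")
| where id_norm = tok_norm
```

This way "ABCD  1234" typed in the filter matches "ABCD__1234", "ABCD___1234", etc., since both sides normalize to "ABCD 1234".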
Remember that after each step in your processing pipeline you get only those results from the immediately preceding command. So if you do all those | where commands in a row, the first one will filter out all those results for which getperct wasn't more than 50, the second one will filter out (of those remaining after the first where) those that do not fit the next condition, and so on. So your three wheres in a row are equivalent to

| where getperct>50 AND putperct>10 AND deleteperct>80

but you want at least one of those conditions fulfilled, so you want

| where (getperct>50) OR (putperct>10) OR (deleteperct>80)
Hello, Thank you for your answer! I made sure that all the points you mentioned are correctly implemented and also checked the documentation you sent. I fixed the problem by enabling indexing on the Heavy Forwarder, and now the client is appearing in its forwarder management UI as well. However, it's still showing in the other instances (Manager Server, Indexers etc.) as well. Also, I don't want to turn on indexing on the Heavy Forwarder, so as not to index data; is there a way to avoid enabling it and still get the client showing in the UI? It's a real pain of a bug; I hope they fix it.
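If the goal is for the client to phone home only to the Heavy Forwarder, the client-side stanza would look something like this (the hostname is a placeholder; 8089 is Splunk's default management port):

```ini
# deploymentclient.conf on the client instance
[deployment-client]

[target-broker:deploymentServer]
targetUri = hf01.example.com:8089
```

A client should appear only in the forwarder management UI of the instance named in targetUri, so showing up elsewhere suggests a stale or conflicting deploymentclient.conf somewhere in the configuration layers.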
Hi @purcell12491 , could you better describe your requirement: operating systems, fields used, etc.? Ciao. Giuseppe
Hi @anandhalagaras1, the regex substitution is correct. Are you sure about the sourcetype? Is there any sourcetype replacement in your data? Are there other Heavy Forwarders before the one you used for the masking? Ciao. Giuseppe
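For reference, ingest-time masking is typically a props.conf SEDCMD (or a transforms.conf rule) on the first full Splunk instance that parses the data; the sourcetype name and pattern below are placeholders for illustration, not taken from the thread:

```ini
# props.conf — SEDCMD runs at parse time, so it must live on the
# first heavy forwarder (or indexer) that touches the raw data
[your_sourcetype]
SEDCMD-mask_ids = s/\d{4}-\d{4}-\d{4}-\d{4}/####-MASKED/g
```

This is why the questions above matter: if another Heavy Forwarder parses the events first, or the sourcetype is renamed before parsing, the masking stanza never fires.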