All Posts


Here is an actual problem sample... good to see the outlier at -9.4
Hi @gcusello  I will check again. I also used batched results and still did not see any data. That is why I am not narrowing my focus to the rising column, but I will evaluate further and ensure there are no errors with the rising column. Thanks!
@yuanliu  Thank you for your help. I accepted your suggestion as the solution, with the following notes:
- sendemail didn't work because I wasn't an admin
- Using an alert worked just fine
Can you clarify what you meant by "join will get me nowhere"? The result using join worked just fine. My intention is to "join" the data, not to "append" it. When I used append, the data was appended to the original data and I had to use the stats command to merge it. Thanks
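For readers following the thread, the two patterns being compared can be sketched in SPL roughly like this. The first search enriches with join; the second appends the two result sets and then merges with stats. All index, sourcetype, and field names here are hypothetical placeholders, not the poster's actual data:

```
index=main sourcetype=orders
| join type=inner user
    [ search index=main sourcetype=profiles | fields user dept ]

index=main sourcetype=orders
| append [ search index=main sourcetype=profiles | fields user dept ]
| stats values(dept) as dept values(amount) as amount by user
```

With join, each order event picks up the matching profile fields directly; with append, the profile rows land below the order rows as separate results, which is why a trailing stats (or similar) is needed to fold them together by the common key.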
So all 3 files are picked up by this one monitor stanza? Are the files all truly the same format, i.e. the same "sourcetype"? Can you explain a bit more about why we are omitting just one file? What can we use to uniquely identify this particular source? The host? It sounds like it has to be source + something else to make it unique. If you can't differentiate them at the source, then perhaps something like INGEST_EVAL or a "sourcetype rename" is needed. It seems to me you might just be overloading the config... I mean, maybe just don't deploy an input that picks up this file in prod? That's why I asked whether they truly are all the same sourcetype/format.
Hi matty, Thanks for your quick response.
The lab and prod file paths are the same - yes, but the sourcetype name is different for prod and stage. I can't key on the sourcetype in props because three log files are part of one sourcetype, and among those I am restricting only one log file - but I want all three logs in stage.
Also, are you on-prem or cloud? - on-prem.
What does your inputs.conf stanza look like?

[monitor://<path>]
sourcetype = <sourcetype name>

Thanks
The Windows command doesn't work - nothing happens: msiexec.exe /i Splunk.msi SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=MyNewPassword /quiet
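One thing worth checking (an assumption, not a confirmed diagnosis for this case): Splunk's Windows MSI is documented as requiring AGREETOLICENSE=Yes for unattended installs, and without it a /quiet run tends to exit without doing anything. A sketch of the command with that flag and a verbose log for troubleshooting:

```
msiexec.exe /i Splunk.msi AGREETOLICENSE=Yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=MyNewPassword /quiet /l*v splunk_install.log
```

If it still fails silently, the generated splunk_install.log should show where the installer bailed out.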
Do you have an SPL code hint for me?
This is part of the splunkd health report; it is configured in health.conf. I would suggest reviewing whether this "forwarder" is sending old files, is actually falling behind, or needs some cleanup of its ingestion tracker values.
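For reference, the thresholds for this indicator live in health.conf. A sketch of the relevant stanza is below; the indicator name matches the error message quoted in this thread, but the numeric values are illustrative, not the shipped defaults:

```
[feature:ingestion_latency]
indicator:ingestion_latency_gap_multiplier:yellow = 150
indicator:ingestion_latency_gap_multiplier:red = 180
indicator:ingestion_latency_lag_sec:yellow = 30
indicator:ingestion_latency_lag_sec:red = 60
```

Raising the thresholds only silences the alert; it is usually better to first confirm whether the forwarder is genuinely lagging.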
Hi!  If I am following your question, you are concerned because the lab and prod file paths are the same? You are not required to set the source path to the file in props.conf to get the desired outcome. If your sourcetype is being set in the inputs that pick up this file, you can simply configure the props to match on the sourcetype to do the processing. Also, I don't think you want to duplicate the stanza names in transforms.conf, i.e. [setnull] is named twice; that could lead to unintended consequences. What does your inputs.conf stanza look like? How are you sending this file (UF to indexers? UF to HF to indexers?) Also, are you on-prem or cloud? I ask because Ingest Actions (and other solutions like Ingest Processor or Edge Processor) provide a UI for you to do this, which helps validate and avoid config mistakes. Regardless, please always test your configs in a local lab environment to avoid having a bad day.
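A minimal sketch of the sourcetype-matching approach described above, assuming the sourcetype is set in inputs.conf (all stanza, sourcetype, and file names below are placeholders, not the poster's actual config):

```
# props.conf (on the parsing tier)
[<prod_sourcetype>]
TRANSFORMS-dropfile = drop_prod_logfile

# transforms.conf
[drop_prod_logfile]
SOURCE_KEY = MetaData:Source
REGEX = <log_file_name>\.log$
DEST_KEY = queue
FORMAT = nullQueue
```

Because the transform keys on both the sourcetype (via the props stanza) and the source path (via SOURCE_KEY), only the one file is routed to nullQueue, while the other two log files under the same sourcetype keep flowing.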
Hello, I have a common log file with the same name in both production and stage, but with a different sourcetype name in each. As I don't want those logs to be ingested from production, I added the entry below in props.conf:

[source::<Log file path>]
TRANSFORMS-null = setnull

transforms.conf:

[setnull]
REGEX = BODY
DEST_KEY = queue
FORMAT = nullQueue

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

But I want the same log file from stage and not from production. In props.conf, will adding the sourcetype of prod restrict the logs from production and still ingest the logs from stage, where the sourcetype name is different?

[source::<Log file path>]
[sourcetype = <Prod Sourcetype>]
TRANSFORMS-null = setnull

In addition, the prod sourcetype has two other logs, and I don't want those to get stopped because of these configuration changes. Thanks
Thank you for replying, Rick. My English is limited, but I will try. I mistyped the Splunk version: it is 9.3.1, not 7.3. Why is my splunkd using loopback connections? I installed splunk-9.3.1-0b8d769cb912-x64-release.msi and, as far as I know, did not change any settings. On this server I ran netstat; the result is below.

C:\>netstat -an -p tcp

Active Connections

  Proto  Local Address        Foreign Address        State
  TCP    0.0.0.0:80           0.0.0.0:0              LISTENING
  TCP    0.0.0.0:88           0.0.0.0:0              LISTENING
  TCP    0.0.0.0:135          0.0.0.0:0              LISTENING
  TCP    0.0.0.0:389          0.0.0.0:0              LISTENING
  TCP    0.0.0.0:443          0.0.0.0:0              LISTENING
  TCP    0.0.0.0:445          0.0.0.0:0              LISTENING
  TCP    0.0.0.0:464          0.0.0.0:0              LISTENING
  TCP    0.0.0.0:593          0.0.0.0:0              LISTENING
  TCP    0.0.0.0:636          0.0.0.0:0              LISTENING
  TCP    0.0.0.0:3268         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:3269         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:4112         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:4430         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:4649         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:5985         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:8000         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:8080         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:8089         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:8191         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:9389         0.0.0.0:0              LISTENING
  TCP    0.0.0.0:47001        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49664        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49665        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49666        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49667        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49668        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49670        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49671        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49672        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49674        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49677        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49681        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:49697        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:51142        0.0.0.0:0              LISTENING
  TCP    0.0.0.0:62000        0.0.0.0:0              LISTENING
  TCP    127.0.0.1:53         0.0.0.0:0              LISTENING
  TCP    127.0.0.1:8000       127.0.0.1:59455        ESTABLISHED
  TCP    127.0.0.1:8000       127.0.0.1:59484        ESTABLISHED
  TCP    127.0.0.1:8065       0.0.0.0:0              LISTENING
  TCP    127.0.0.1:8089       127.0.0.1:60730        ESTABLISHED
  TCP    127.0.0.1:8089       127.0.0.1:62099        TIME_WAIT
  TCP    127.0.0.1:8191       127.0.0.1:53438        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53439        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53443        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53448        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53501        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53504        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53506        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53508        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53509        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53510        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53511        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:53512        ESTABLISHED
  TCP    127.0.0.1:8191       127.0.0.1:58525        ESTABLISHED
  TCP    127.0.0.1:53422      0.0.0.0:0              LISTENING
  TCP    127.0.0.1:53422      127.0.0.1:53473        ESTABLISHED
  TCP    127.0.0.1:53426      0.0.0.0:0              LISTENING
  TCP    127.0.0.1:53438      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53439      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53443      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53448      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53473      127.0.0.1:53422        ESTABLISHED
  TCP    127.0.0.1:53501      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53504      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53506      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53508      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53509      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53510      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53511      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:53512      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:58525      127.0.0.1:8191         ESTABLISHED
  TCP    127.0.0.1:59455      127.0.0.1:8000         ESTABLISHED
  TCP    127.0.0.1:59484      127.0.0.1:8000         ESTABLISHED
  TCP    127.0.0.1:60730      127.0.0.1:8089         ESTABLISHED
  TCP    127.0.0.1:61987      127.0.0.1:8089         TIME_WAIT
  TCP    192.168.0.8:53       0.0.0.0:0              LISTENING
  TCP    192.168.0.8:139      0.0.0.0:0              LISTENING
  TCP    192.168.0.8:445      192.168.0.1:51760      ESTABLISHED
  TCP    192.168.0.8:445      192.168.0.44:59017     ESTABLISHED
  TCP    192.168.0.8:4649     192.168.0.44:59008     ESTABLISHED
  TCP    192.168.0.8:58220    20.198.118.190:443     ESTABLISHED
  TCP    192.168.0.8:59051    20.194.180.207:443     ESTABLISHED
  TCP    192.168.0.8:59103    3.216.246.128:443      ESTABLISHED
  TCP    192.168.0.8:59125    50.16.88.233:443       ESTABLISHED
  TCP    192.168.0.8:59149    54.228.78.235:443      ESTABLISHED
  TCP    192.168.0.8:59174    151.101.193.140:443    ESTABLISHED
  TCP    192.168.0.8:59204    151.101.193.140:443    ESTABLISHED
  TCP    192.168.0.8:59207    35.186.194.58:443      ESTABLISHED
  TCP    192.168.0.8:59218    151.101.193.140:443    ESTABLISHED
  TCP    192.168.0.8:59261    34.149.224.134:443     ESTABLISHED
  TCP    192.168.0.8:59275    151.101.228.157:443    ESTABLISHED
  TCP    192.168.0.8:59297    54.228.78.235:443      ESTABLISHED
  TCP    192.168.0.8:59301    151.101.129.181:443    TIME_WAIT
  TCP    192.168.0.8:59507    184.72.249.85:443      ESTABLISHED
  TCP    192.168.0.8:60773    104.26.13.205:443      TIME_WAIT
  TCP    192.168.0.8:60785    23.50.118.133:443      ESTABLISHED
  TCP    192.168.0.8:60829    34.107.204.85:443      TIME_WAIT
  TCP    192.168.0.8:60851    13.225.183.97:443      ESTABLISHED
  TCP    192.168.0.8:60887    172.66.0.227:443       TIME_WAIT
  TCP    192.168.0.8:60994    18.154.132.17:443      TIME_WAIT
  TCP    192.168.0.8:61016    34.66.73.214:443       ESTABLISHED
  TCP    192.168.0.8:61027    3.226.63.48:443        ESTABLISHED
  TCP    192.168.0.8:61047    35.186.224.24:443      ESTABLISHED
  TCP    192.168.0.8:61050    34.117.162.98:443      TIME_WAIT
  TCP    192.168.0.8:61074    34.111.113.62:443      ESTABLISHED
  TCP    192.168.0.8:61099    107.178.240.89:443     ESTABLISHED
  TCP    192.168.0.8:61108    35.244.154.8:443       ESTABLISHED
  TCP    192.168.0.8:61109    107.178.254.65:443     ESTABLISHED
  TCP    192.168.0.8:61111    34.98.64.218:443       ESTABLISHED
  TCP    192.168.0.8:61184    20.198.118.190:443     ESTABLISHED
  TCP    192.168.0.8:61212    151.101.1.140:443      ESTABLISHED
  TCP    192.168.0.8:61412    35.163.74.134:443      ESTABLISHED
  TCP    192.168.0.8:61452    35.163.74.134:443      ESTABLISHED
  TCP    192.168.0.8:61986    65.9.42.42:443         TIME_WAIT
  TCP    192.168.0.8:62010    65.9.42.42:443         TIME_WAIT
  TCP    192.168.0.8:62030    65.9.42.42:443         TIME_WAIT
  TCP    192.168.0.8:62043    65.9.42.42:443         TIME_WAIT
  TCP    192.168.0.8:62056    65.9.42.28:443         TIME_WAIT
  TCP    192.168.0.8:62079    192.168.0.8:443        TIME_WAIT
  TCP    192.168.0.8:62080    192.168.0.8:62000      TIME_WAIT
  TCP    192.168.0.8:62082    65.9.42.62:443         TIME_WAIT
  TCP    192.168.0.8:62098    65.9.42.62:443         TIME_WAIT
  TCP    192.168.0.8:62103    13.107.21.239:443      ESTABLISHED
  TCP    192.168.0.8:62104    13.107.21.239:443      ESTABLISHED
  TCP    192.168.0.8:62117    65.9.42.62:443         TIME_WAIT

Why do the 80xx ports show "ESTABLISHED"? Shouldn't they appear as "LISTENING"? How can I change the status? Please advise. Thank you.
To send specific notable events from the Enterprise Security Incident Review page for investigation, an add-on called the ServiceNow Security Operations Add-on is available. This add-on allows Splunk ES analysts to create security-related incidents and events in ServiceNow. It features on-demand creation of single ServiceNow events or incidents from Splunk scheduled alerts, enabling the creation of both single and multiple ServiceNow events and incidents. For detailed integration steps, refer to the add-on's documentation. The reverse integration between ServiceNow and Splunk for incident management can be achieved using an out-of-the-box method. If this reply is helpful, karma would be appreciated.
Hi @Nicolas2203, ok, good for you; let me know. See you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Ahhh... the SOURCE_KEY part I missed. Good catch!
Hello, I just checked, and the Microsoft Cloud Services add-on manages checkpoints locally on heavy forwarders. However, there is a configuration in the app that allows you to store checkpoints in a container within an Azure storage account. That way, when you need to start log collection on another heavy forwarder, it can facilitate the process. I will configure that and test it, and I'll let you know! Thanks, Nico
The IP address keeps changing with the same error.
Forwarder Ingestion Latency
Root cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 272246. Message from D97C3DE9-B0CE-408F-9620-5274BAC12C72:192.168.1.191:50409
How do you solve this problem?
1. Notable creation as a ServiceNow incident:
The reverse integration between ServiceNow and Splunk for incident management can be achieved using an out-of-the-box method. To send specific notable events from the Enterprise Security Incident Review page for investigation, an add-on called the ServiceNow Security Operations Add-on is available. This add-on allows Splunk ES analysts to create security-related incidents and events in ServiceNow. It features on-demand creation of single ServiceNow events or incidents from Splunk scheduled alerts, enabling the creation of both single and multiple ServiceNow events and incidents.
Another approach is to customize the Splunk Add-on for ServiceNow by modifying the /opt/splunk/etc/apps/Splunk_TA_snow/local/alert_actions.conf file with the following configuration, which should be applied to your deployer and pushed to your Search Head Cluster (SHC):

[snow_incident]
param._cam = {\
  "category": ["others"],\
  "task": ["others"],\
  "subject": ["others"],\
  "technology": [{"vendor": "unknown", "product": "unknown"}],\
  "supports_adhoc": true\
}
param.state = 1
param.correlation_id = $job.sid$
param.configuration_item = splunk
param.contact_type =
param.assignment_group =
param.category =
param.subcategory =
param.account = splunk_integration
param.short_description =

All the param.* fields can be hardcoded in this configuration file to prepopulate the ad hoc invocation, if that is your preference. If you need any further assistance, please let me know. Note: using both add-ons will facilitate sending notables to the ServiceNow Incident Review.

2. Notable closure:
Updating Splunk notables when incidents are opened or closed in ServiceNow (configured on the ServiceNow side):
Step 1: Create an Outbound REST Message in ServiceNow
- Navigate to System Web Services > Outbound > REST Message in your ServiceNow instance.
- Click New to create a new REST message.
- Name the message and specify the endpoint, which should be the URL of your Splunk instance.
Step 2: Define HTTP Methods
- In the new REST message, go to the HTTP Methods related list.
- Create a new record and select the appropriate HTTP method (usually POST).
- In the Endpoint field, add the specific API endpoint for updating notables.
Step 3: Define Headers and Parameters
- If your Splunk instance requires specific headers or parameters, define them in this step. For example, you may need to set authentication headers or other required parameters.
Step 4: Create a Business Rule
- Navigate to System Definition > Business Rules in ServiceNow.
- Create a new business rule:
  - Set the table to Incident.
  - Define the conditions to trigger the rule, typically "After" an insert or update when the incident state changes to "Closed."
  - In the Advanced tab, write a script to send the REST message when the specified conditions are met. Here's a sample script:

// Sample script to send the REST message
var restMessage = new sn_ws.RESTMessageV2();
restMessage.setHttpMethod('POST'); // or 'PUT'
restMessage.setEndpoint('https://your-splunk-instance/api/update_notables'); // Update with your endpoint
restMessage.setRequestHeader('Content-Type', 'application/json');
restMessage.setRequestHeader('Authorization', 'Bearer your_api_token'); // If required

var requestBody = {
  "incident_id": current.sys_id,
  "state": current.state
  // Add other relevant fields here
};
restMessage.setRequestBody(JSON.stringify(requestBody));

var response = restMessage.execute();
var responseBody = response.getBody();
var httpStatus = response.getStatusCode();
// Handle the response as needed

Step 5: Test the Integration
- Close an incident in ServiceNow and verify whether the corresponding alert is also closed in Splunk.
- Ensure that you replace 'Your REST Message' and 'Your HTTP Method' with the actual names you provided when creating the REST message.
Adjust parameters and headers as required by your Splunk instance's API.
Additional configuration: to properly configure the REST call for updating notables in Splunk, ensure you pass the necessary parameters and headers, particularly the ruleID, as mentioned in the Notable Event API reference for /services/notable_update.
Splunk notable update endpoint:
Endpoint URL: https://<host>:<mPort>/services/notable_update
HTTP Method: POST
If this reply is helpful, karma would be appreciated.
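As a hedged illustration of calling that endpoint from the command line (the host, credentials, notable event ID, and status code below are all placeholders; in a default ES install, status 5 commonly maps to "Closed", but check your own status configuration):

```
curl -k -u admin:changeme \
  "https://<host>:8089/services/notable_update" \
  -d "ruleUIDs=<notable_event_id>" \
  -d "status=5" \
  -d "comment=Closed from ServiceNow"
```

This is the same call the ServiceNow business rule would make via sn_ws.RESTMessageV2, so it is a convenient way to validate the endpoint and credentials before wiring up the integration.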
You might be better off using eventstats to add the average to all the events, then using the where command to keep the events you want to delete, then removing the average field (with the fields command) before deleting the events.
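That sequence might look roughly like this in SPL. The index, sourcetype, field name, and the 10x threshold are hypothetical, and | delete requires the can_delete capability:

```
index=main sourcetype=metrics
| eventstats avg(response_time) as avg_rt
| where response_time > 10 * avg_rt
| fields - avg_rt
| delete
```

Running the search without the final | delete first is a safe way to confirm that only the intended outlier events are matched.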
Hi, delete is not a must-have... excluding the faulty results from the search is another option... My logic: timechart avg > get the avg min and avg max from this timechart > exclude events with the min/max avg > new timechart
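One way to sketch that logic in a single search, rather than a literal timechart-then-filter sequence, is to compute the per-bucket average with eventstats and drop the buckets whose average sits at the global extremes. The field name and span here are hypothetical:

```
index=main sourcetype=metrics
| bin _time span=1h
| eventstats avg(value) as bucket_avg by _time
| eventstats min(bucket_avg) as min_avg max(bucket_avg) as max_avg
| where bucket_avg > min_avg AND bucket_avg < max_avg
| timechart span=1h avg(value)
```

The first eventstats mirrors the original timechart avg, the second finds its min and max, and the where clause excludes the buckets at those extremes before the final timechart.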