All Posts

Dear All, I have one index that I use to store both messages and a summary report. In report="report_b", it stores the running case name and the device id (DEV_ID) in use at timestamp _time, e.g.:

_time  DEV_ID  case_name   case_action
01:00  111     ping111.py  start
01:20  111     ping111.py  end
02:00  222     ping222.py  start
02:30  222     ping222.py  end
02:40  111     ping222.py  start
03:00  111     ping222.py  end

For Message_Name="event_a", events are stored in index=A as below:

_time  LOG_ID  Message_Name
01:10  01      event_a
02:50  02      event_a

I would like to associate each event_a with the case that is running when it is sent, so I use the search below:

Firstly, find the device id (DEV_ID) associated with this log (LOG_ID). Secondly, associate event_a and case_name by DEV_ID. Finally, list only the event_a results.

(index=A Message_Name="event_a") OR (index=A report="report_b")
| lookup table_A.csv LOG_ID OUTPUT DEV_ID
| sort 0 + _time
| streamstats current=false last(case_name) as last_case_name, last(case_action) as last_case_action by DEV_ID
| eval case_name=if(isnull(case_name) AND last_case_action="start", last_case_name, case_name)
| where isnotnull(Message_Name)
| table _time Message_Name LOG_ID DEV_ID case_name

The output would be:

_time  Message_Name  LOG_ID  DEV_ID  case_name
01:10  event_a       01      111     ping111.py
02:50  event_a       02      111     ping222.py

The search works fine, but the amount of data is huge, so the lookup command takes a very long time. Furthermore, there is actually no need to apply the lookup to report="report_b" events:

(index=A Message_Name="event_a"): 150,000 records in 24 hours
(index=A report="report_b"): 700,000 records in 24 hours

Is there any way to rewrite the search so that the lookup applies only to events belonging to (index=A Message_Name="event_a")? I tried subsearch, append, and appendpipe to restrict the search to the associated DEV_IDs first, but it is not working.

Thank you so much.
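One possible restructuring (a sketch only, untested against your data) is to move the lookup inside a multisearch leg, so it runs only over the event_a events; lookup is a streaming command, so it is allowed inside multisearch, and the rest of the pipeline stays unchanged:

```
| multisearch
    [ search index=A Message_Name="event_a"
      | lookup table_A.csv LOG_ID OUTPUT DEV_ID ]
    [ search index=A report="report_b" ]
| sort 0 + _time
| streamstats current=false last(case_name) as last_case_name, last(case_action) as last_case_action by DEV_ID
| eval case_name=if(isnull(case_name) AND last_case_action="start", last_case_name, case_name)
| where isnotnull(Message_Name)
| table _time Message_Name LOG_ID DEV_ID case_name
```

This way only the ~150,000 event_a records pass through the lookup instead of all ~850,000.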
The command you are looking for is still eventstats.

index=zzzzzz
| stats count as Total, count(eval(txnStatus="FAILED")) as "Failed_Count", count(eval(txnStatus="SUCCEEDED")) as "Passed_Count" by country, type, ProductCode, errorinfo
| eventstats sum(Total) as Total
| fields country, ProductCode, type, Failed_Count, Passed_Count, errorinfo, Total

It's all about how you group the results.
Thanks @etoombs, it worked.
Hi @Thulasiraman,

The "search" command cannot compare against another field's value. Please try the "where" command, like below:

index="jenkins" sourcetype="json:jenkins" job_name="$env$_Group*" event_tag=job_event type=completed
| eval rerunGroup = case("$group$"=="Group06", "*Group01*", "$group$"=="Group07", "*Group02*", "$group$"=="Group08", "*Group03*", "$group$"=="Group09", "*Group04*", "$group$"=="Group10", "*Group05*", 1==1, "???")
''' | table rerunGroup - this shows Group01 in the table '''
| where job_name="*$group$*" OR job_name=rerunGroup
| head 2
| dedup build_number
| stats sum(test_summary.passes) as Pass
| fillnull value="Test Inprogress..." Pass
I have an Add-On which defines a new data input. Via the UI, I can easily create new instances of the same input, but I want to create them programmatically, via Python. The inputs.conf.spec file for the app contains:

[strava_api://<name>]
interval =
index =
access_code =
start_time =
reindex_data =

These are the values I am prompted for when using the UI manually. But how do I achieve the same result via Python (using python3)? I have the Python SDK installed, but I just don't know where to start and would appreciate some inspiration from the Community. Thanks in advance for any comments/tips/solutions. (For those with a keen eye, the Add-On in question is Strava for Splunk.)
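Modular input instances are managed through the REST endpoint services/data/inputs/<scheme> (here, strava_api). A minimal sketch of this: the helper below builds the form-encoded POST body from the inputs.conf.spec fields (the input name "my_athlete" and all values are placeholders), and the comment shows the equivalent one-liner with the Splunk Python SDK's service.inputs.create, assuming a local management port and admin credentials:

```python
from urllib.parse import urlencode

def strava_input_payload(name, interval, index, access_code,
                         start_time, reindex_data):
    """Build the form-encoded POST body for
    POST /services/data/inputs/strava_api (creates one input instance).
    Field names mirror the app's inputs.conf.spec stanza."""
    return urlencode({
        "name": name,                  # becomes strava_api://<name>
        "interval": interval,
        "index": index,
        "access_code": access_code,
        "start_time": start_time,
        "reindex_data": reindex_data,
    })

# With the Splunk Python SDK (splunklib), the same thing is one call --
# a sketch, assuming default host/port and placeholder credentials:
#
#   import splunklib.client as client
#   service = client.connect(host="localhost", port=8089,
#                            username="admin", password="changeme")
#   service.inputs.create("my_athlete", "strava_api",
#                         interval="3600", index="strava",
#                         access_code="<oauth-code>",
#                         start_time="0", reindex_data="0")

if __name__ == "__main__":
    print(strava_input_payload("my_athlete", "3600", "strava",
                               "abc123", "0", "0"))
```

The payload builder is useful if you prefer plain urllib/requests against the management port instead of the SDK; either way the endpoint and field names come straight from the spec file.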
I have installed the deployment server and configured the required inputs.conf and outputs.conf (for ingesting logs from the UF to my indexer) in a deployment app on the deployment server, and pointed the UF at the deployment server. I can see the new host under the Forwarder Management tab and have mapped it to the appropriate server classes and apps. Still, I am not able to get the logs on the indexer. In the UF's splunkd log I see the errors below:

11-20-2023 17:25:40.602 +0400 ERROR X509Verify - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) failed validation; error=7, reason="certificate signature failure"
11-20-2023 17:25:40.602 +0400 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='decrypt error'.
11-20-2023 17:25:40.602 +0400 WARN HttpPubSubConnection - Unable to parse message from PubSubSvr:
11-20-2023 17:25:40.602 +0400 INFO HttpPubSubConnection - Could not obtain connection, will retry after=86.429 seconds.
11-20-2023 17:25:40.778 +0400 INFO WatchedFile - Will begin reading at offset=2295058 for file='/var/log/audit/audit.log'.
11-20-2023 17:25:46.129 +0400 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/anaconda/ks-script-lk6ot_yw.log'.
11-20-2023 17:25:46.130 +0400 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/anaconda/ks-script-wo9l091q.log'.
11-20-2023 17:25:49.856 +0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-20-2023 17:26:01.857 +0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-20-2023 17:26:07.910 +0400 INFO ScheduledViewsReaper - Scheduled views reaper run complete. Reaped count=0 scheduled views
11-20-2023 17:26:07.914 +0400 INFO TcpOutputProc - Removing quarantine from idx=192.168.1.5:9997
11-20-2023 17:26:07.914 +0400 INFO TcpOutputProc - Removing quarantine from idx=192.168.1.6:9997
11-20-2023 17:26:07.921 +0400 ERROR TcpOutputFd - Read error. Connection reset by peer
11-20-2023 17:26:07.928 +0400 ERROR TcpOutputFd - Read error. Connection reset by peer
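The X509 "certificate signature failure" against the default SplunkServerDefaultCert usually means the UF is verifying server certificates it cannot validate against its CA bundle. A hedged sketch of the relevant UF-side settings (attribute names and defaults vary by Splunk version, and the file path is a placeholder; verify against your security policy before changing anything):

```
# $SPLUNK_HOME/etc/system/local/server.conf on the UF -- sketch only
[sslConfig]
# Point at the CA that actually signed the deployment server / indexer
# certificates, rather than the default self-signed chain
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycacert.pem
sslVerifyServerCert = true
```

If the servers are still using the default self-signed certificates, the longer-term fix is to deploy certificates signed by a common CA to both sides.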
Hey @ITWhisperer  Thanks for the detailed and helpful response. This looks promising. I will try this out and will update the thread with further findings.
The version of Splunk I'm currently using is 9.1.1
Hello Splunk team,

As the picture shows, I found a UI duplication problem in the "select data type" module. I tested different browsers and found this problem in all of them. I suspect it is caused by a defect in the Splunk UI design. Can you help solve this problem? Thank you!
But then I don't get the individual Totals if I do that along with the message.
@merrelr You can make that object visible in all apps by exporting it to system    
I have a singlevalue trellis view showing the status of items: up (green) and down (red). When the status is down (red), I would like the trellis tile to flash or blink. I found the HTML below for a blinking effect:

<html>
  <style>
    @keyframes blink {
      100%, 0% { opacity: 0.6; }
      60% { opacity: 0.9; }
    }
    #singlevalue rect {
      animation: blink 0.8s infinite;
    }
  </style>
</html>

However, it makes all the singlevalues blink. I am using rangemap to make the background of each singlevalue red/green. How can I make only the red tiles in the trellis blink? Thanks a lot.
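One common approach (a sketch, untested here) is to narrow the CSS selector with an attribute match on the rect's fill color, so only the red tiles animate. The hex value below is a placeholder, and the exact DOM structure varies by Splunk version, so inspect the rendered SVG in your browser's dev tools to confirm the selector and the exact red your rangemap applies:

```
<html>
  <style>
    @keyframes blink {
      100%, 0% { opacity: 0.6; }
      60% { opacity: 0.9; }
    }
    /* #FF0000 is a placeholder -- replace it with the exact fill value
       your rangemap's "red" range writes into the rect element */
    #singlevalue rect[fill="#FF0000"] {
      animation: blink 0.8s infinite;
    }
  </style>
</html>
```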
I'm trying to make SOC use cases clear, concise, and easy to find later. It is possible to build threat-detection use cases based on MITRE ATT&CK, but threat detection is not all a SOC does; there are many other requirements, such as compliance and business use cases. Which approach would be most effective and correct? Here are my questions.

Use Case Development:
- Best practices for effective SOC use cases and recommended frameworks?

Documentation and Knowledge Management:
- Strategies/tools for organizing SOC use cases for searchability?

Continuous Improvement:
- Methods for improving and updating SOC use cases over time?
- Can you share examples of how penetration testing results have influenced the development of SOC use cases?

Risk Assessment Integration:
- How do you align SOC use cases with risk levels identified in risk assessments?
- Are there specific metrics or indicators from risk assessments that should be incorporated into SOC use cases?
- What best practices do you suggest for regularly reviewing and updating SOC use cases based on changes in risk assessments?
Figured it out.

| makeresults
| eval dest="url1,url2,url3", dest_port=8443, dest=split(dest,",")
| mvexpand dest
| `sslcert(dest, dest_port)`
| lookup sslcert_lookup dest, dest_port OUTPUT ssl_subject_common_name ssl_subject_alt_name ssl_end_time ssl_validity_window
| eval ssl_subject_alt_name = split(ssl_subject_alt_name,"|")
| eval days_left = round(ssl_validity_window/86400)
| table ssl_subject_common_name ssl_subject_alt_name days_left
| sort days_left
Hi @AL3Z,

Both locations are right. The first one, \etc\system\local\inputs.conf, is the default inputs.conf file path. Generally, if you don't have different apps, you can use this file alone and specify all files for monitoring, whitelisting, and blacklisting. If you have lots of things to monitor, it is better to group them as "apps" and keep their config files in their own folders, so troubleshooting becomes easier.

Then, for understanding file precedence, see: http://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Wheretofindtheconfigurationfiles
I am having an issue with the initial configuration to generate LDAP queries. In the GUI I have my settings as shown. Does anyone who is a Splunk admin have any idea how to make the LDAP queries work? I have tried configuring via the Add-on for Active Directory, and my ldap.conf file is set as:

[ADS]
alternatedomain = abc.bcd.com
basedn = dc=ads,dc=abc,dc=com
binddn = CN=SplunkUser,OU=Service Accounts,OU=User Accounts,DC=ads,DC=abc,DC=com
port = 3269
server = x.x.x.x
ssl = 1

However, the test connection does not work.
Hello,

I'm aiming to test event blacklists on my host system locally, but I'm uncertain about the correct location for the inputs.conf file that should hold these blacklists. Would it be:

C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf
C:\Program Files\SplunkUniversalForwarder\etc\apps\myapp\local\inputs.conf

Thanks...
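Either location works (app-level stanzas take precedence over system\local when both define the same stanza). For reference, a minimal sketch of what a Windows event blacklist stanza looks like in inputs.conf — the channel and EventCode here are examples, not a recommendation:

```
# Example only: drop EventCode 4662 from the Security channel
[WinEventLog://Security]
disabled = 0
blacklist1 = EventCode="4662"
```

Run "splunk btool inputs list --debug" on the forwarder to confirm which file each effective setting comes from.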
I have 3 standalone indexers, and another 3 indexers in a cluster. We want to decommission the 3 standalones, but first we have to move the data off them onto the cluster.

I imagine the process would be something like: roll all hot buckets to warm, then rsync the warm and cold mounts/directories to a temp directory on one of the indexer cluster members (standalone 1 to cluster member 1, 2 to 2, 3 to 3). But once we rsync the data over, how do I get the new indexer to recognize the old imported data? Is it as simple as merging the imported data into the appropriate index directory on the new indexer? For example, copying the old wineventlog index into the same-named directory on the new indexer: would that work, or is there more to it? Is there some kind of Splunk-native command to move all data from indexer A to indexer B? Is there a better (or correct) way to make the new indexers recognize the imported data?

I appreciate any help! Thanks.
Hi @Mo.Abdelrazek, Thanks for the info. If you can share what you learned from Support as a reply here that would be awesome.
@Ryan.Paredez I regenerated my OAuth token, and I have read the AppD documents, but I still see the error. I have already opened a case with Support.