All Posts


If Splunk already extracted two Account Names, wouldn't it be simpler to give the first and second values different names?

index="wineventlog" EventCode=4726
| eval SubjectAccountName = mvindex('Account Name', 0)
| eval TargetAccountName = mvindex('Account Name', 1)

Also, I remember that some say Windows events can come in as JSON. If you have structured data, you don't need to worry about any of this.
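The pick-by-position idea behind mvindex maps directly onto list indexing. A toy Python sketch (the account values are made up, not from a real 4726 event):

```python
# 'Account Name' extracted twice from one event becomes a multivalue field;
# mvindex(field, 0) / mvindex(field, 1) are simply positional picks.
account_name = ["ADMIN$", "deleted.user"]   # hypothetical example values

subject_account_name = account_name[0]      # mvindex('Account Name', 0)
target_account_name = account_name[1]       # mvindex('Account Name', 1)
```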
Just a note: often it is better to describe your use case than to try to "fix" SPL. Are you sure it is lookup that slows the search, and not sort? Sorting a large amount of data is expensive in many ways, while lookup is a very efficient command.

If you must avoid running lookup on report_b events, you can append them after the lookup:

(index=A Message_Name="event_a")
| lookup table_A.csv LOG_ID OUTPUT DEV_ID
| append
    [search index=A report="report_b"]
| sort 0 + _time
| streamstats current=false last(case_name) as last_case_name, last(case_action) as last_case_action by DEV_ID
| eval case_name=if(isnull(case_name) AND last_case_action="start",last_case_name,case_name)
| where isnotnull(Message_Name)
| table _time Message_Name LOG_ID DEV_ID case_name

Not sure how much this will speed the search up, however.
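The carry-forward logic that streamstats and the eval perform here can be sketched in plain Python: walk the events in time order and, per DEV_ID, remember the last seen case so that a later event_a (which has no case_name) inherits the case that was running. The events below are a made-up illustration, not real data:

```python
events = [  # already sorted by _time, as "| sort 0 + _time" guarantees
    {"_time": "01:00", "DEV_ID": "111", "case_name": "ping111.py", "case_action": "start"},
    {"_time": "01:10", "DEV_ID": "111", "Message_Name": "event_a", "LOG_ID": "01"},
    {"_time": "01:20", "DEV_ID": "111", "case_name": "ping111.py", "case_action": "end"},
    {"_time": "02:40", "DEV_ID": "111", "case_name": "ping222.py", "case_action": "start"},
    {"_time": "02:50", "DEV_ID": "111", "Message_Name": "event_a", "LOG_ID": "02"},
]

last_seen = {}   # DEV_ID -> (last case_name, last case_action)
results = []
for ev in events:
    dev = ev["DEV_ID"]
    # eval case_name=if(isnull(case_name) AND last_case_action="start", ...)
    if "case_name" not in ev and last_seen.get(dev, (None, None))[1] == "start":
        ev["case_name"] = last_seen[dev][0]
    # streamstats current=false: state is updated AFTER the current event is evaluated
    if "case_action" in ev:
        last_seen[dev] = (ev.get("case_name"), ev["case_action"])
    # where isnotnull(Message_Name)
    if "Message_Name" in ev:
        results.append(ev)
```

Here `results` ends up holding the two event_a rows, tagged ping111.py and ping222.py respectively, matching the expected output in the question.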
Instead of getting the search box after Splunk login, I want a welcome page like the one in the splunk_secure_gateway app. I have figured out how to make the navigation view, but I am not able to change the launch page to a website-style welcome page. Please advise what changes I can make to a config file or XML file so I can get whatever page look I want.
@inventsekar, I believe this is the cause of the issue, based on the snapshot below:

Creator_Process_Name: C:\Program Files (x86)\Tanium\Tanium Client\TaniumClient.exe
New_Process_Name:     C:\Windows\System32\cmd.exe

When I excluded the creator process name (Tanium), its new process name (cmd.exe) was also excluded.
I gave a dummy URL here, but I do have one private URL where it is working fine.
Dear All,

I have one index that I use to store both messages and a summary report.

report="report_b" stores the running case name and the device id (DEV_ID) used, with timestamp _time. For example:

_time  DEV_ID  case_name   case_action
01:00  111     ping111.py  start
01:20  111     ping111.py  end
02:00  222     ping222.py  start
02:30  222     ping222.py  end
02:40  111     ping222.py  start
03:00  111     ping222.py  end

Message_Name="event_a" events are stored in index=A as below:

_time  LOG_ID  Message_Name
01:10  01      event_a
02:50  02      event_a

I would like to associate each event_a with the case that is running when it is sent, so I use the search below to:

1. Find the device id (DEV_ID) associated with each log (LOG_ID).
2. Associate event_a with case_name by DEV_ID.
3. List the event_a events only.

(index=A Message_Name="event_a") OR (index=A report="report_b")
| lookup table_A.csv LOG_ID OUTPUT DEV_ID
| sort 0 + _time
| streamstats current=false last(case_name) as last_case_name, last(case_action) as last_case_action by DEV_ID
| eval case_name=if(isnull(case_name) AND last_case_action="start",last_case_name,case_name)
| where isnotnull(Message_Name)
| table _time Message_Name LOG_ID DEV_ID case_name

The output would be:

_time  Message_Name  LOG_ID  DEV_ID  case_name
01:10  event_a       01      111     ping111.py
02:50  event_a       02      111     ping222.py

The search works fine, but the amount of data is huge, so the lookup command takes a very long time. Furthermore, there is actually no need to apply the lookup to report="report_b":

(index=A Message_Name="event_a"): 150,000 records in 24 hours
(index=A report="report_b"): 700,000 records in 24 hours

Is there any way to rewrite the search so that lookup applies only to events belonging to (index=A Message_Name="event_a")? I tried subsearch, append, and appendpipe to restrict the lookup to finding the associated DEV_ID first, but could not get it working.

Thank you so much.
The command you are looking for is still eventstats.

index=zzzzzz
| stats count as Total, count(eval(txnStatus="FAILED")) as "Failed_Count", count(eval(txnStatus="SUCCEEDED")) as "Passed_Count" by country, type, ProductCode, errorinfo
| eventstats sum(Total) as Total
| fields country, ProductCode, type, Failed_Count, Passed_Count, errorinfo, Total

It's all about how you group the results.
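The key behavior of eventstats here is that it computes an aggregate and writes it back onto every row, instead of collapsing the rows the way stats does. A small Python sketch over made-up rows (hypothetical stats output, not real data):

```python
# Hypothetical output of the stats command: one row per group.
rows = [
    {"country": "US", "Failed_Count": 2, "Passed_Count": 8, "Total": 10},
    {"country": "DE", "Failed_Count": 1, "Passed_Count": 4, "Total": 5},
]

# eventstats sum(Total) as Total: compute the grand total...
grand_total = sum(r["Total"] for r in rows)

# ...then attach it to every row, keeping the per-group rows intact.
for r in rows:
    r["Total"] = grand_total
```

After this, both rows still exist, but each now carries the overall total (15) in its Total field.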
Thanks @etoombs, it worked.
Hi @Thulasiraman,

The "search" command cannot compare against another field's value. Please try the "where" command instead, like below:

index="jenkins" sourcetype="json:jenkins" job_name="$env$_Group*" event_tag=job_event type=completed
| eval rerunGroup = case("$group$"=="Group06", "*Group01*", "$group$"=="Group07", "*Group02*", "$group$"=="Group08", "*Group03*", "$group$"=="Group09", "*Group04*", "$group$"=="Group10", "*Group05*", 1==1, "???")
``` adding | table rerunGroup here shows Group01 in the table ```
| where job_name="*$group$*" OR job_name=rerunGroup
| head 2
| dedup build_number
| stats sum(test_summary.passes) as Pass
| fillnull value="Test Inprogress..." Pass
I have an Add-On which defines a new data input. Via the UI, I can easily create new instances of the same input, but I want to create them programmatically, via Python. The inputs.conf.spec file for the app contains:

[strava_api://<name>]
interval =
index =
access_code =
start_time =
reindex_data =

These are the values I am prompted for when using the UI manually. How do I achieve the same result via Python (using python3)? I have the Python SDK installed, but I just don't know where to start and would appreciate some inspiration from the Community. Thanks in advance for any comments/tips/solutions. (For those with a keen eye, the Add-On in question is Strava for Splunk.)
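Not from the original post, but a minimal sketch of how this is typically done with the Splunk Python SDK (splunklib): connect to splunkd and create an instance of the modular input kind, passing the same parameters the UI prompts for. The host, credentials, input name, and parameter values below are all placeholders to adjust for your environment:

```python
def strava_input_params(interval, index, access_code, start_time, reindex_data):
    """Collect the strava_api input's parameters, mirroring inputs.conf.spec."""
    return {
        "interval": interval,
        "index": index,
        "access_code": access_code,
        "start_time": start_time,
        "reindex_data": reindex_data,
    }

if __name__ == "__main__":
    # Imported here so the helper above works even without the SDK installed.
    import splunklib.client as client

    # Placeholder connection details -- adjust for your environment.
    service = client.connect(host="localhost", port=8089,
                             username="admin", password="changeme")

    params = strava_input_params("3600", "strava", "<your_access_code>",
                                 "1514764800", "0")
    # "strava_api" is the input kind from the [strava_api://<name>] stanza;
    # "my_athlete" is a hypothetical name and becomes the <name> part.
    service.inputs.create("my_athlete", "strava_api", **params)
```

The same create call can be made against the REST endpoint servicesNS/nobody/<app>/data/inputs/strava_api directly if you prefer plain HTTP over the SDK.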
I have installed a deployment server, where I configured the required inputs.conf and outputs.conf in a deployment app for ingesting logs from the UF to my indexer, and configured the UF to phone home to the deployment server. I can see the new host's details under the deployment server's Forwarder Management tab, and I mapped it to the appropriate server classes and apps. Still, I'm not able to get the logs on the indexer. When I look at splunkd.log on the UF, I find the errors below:

11-20-2023 17:25:40.602 +0400 ERROR X509Verify - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) failed validation; error=7, reason="certificate signature failure"
11-20-2023 17:25:40.602 +0400 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='decrypt error'.
11-20-2023 17:25:40.602 +0400 WARN HttpPubSubConnection - Unable to parse message from PubSubSvr:
11-20-2023 17:25:40.602 +0400 INFO HttpPubSubConnection - Could not obtain connection, will retry after=86.429 seconds.
11-20-2023 17:25:40.778 +0400 INFO WatchedFile - Will begin reading at offset=2295058 for file='/var/log/audit/audit.log'.
11-20-2023 17:25:46.129 +0400 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/anaconda/ks-script-lk6ot_yw.log'.
11-20-2023 17:25:46.130 +0400 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/anaconda/ks-script-wo9l091q.log'.
11-20-2023 17:25:49.856 +0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-20-2023 17:26:01.857 +0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-20-2023 17:26:07.910 +0400 INFO ScheduledViewsReaper - Scheduled views reaper run complete. Reaped count=0 scheduled views
11-20-2023 17:26:07.914 +0400 INFO TcpOutputProc - Removing quarantine from idx=192.168.1.5:9997
11-20-2023 17:26:07.914 +0400 INFO TcpOutputProc - Removing quarantine from idx=192.168.1.6:9997
11-20-2023 17:26:07.921 +0400 ERROR TcpOutputFd - Read error. Connection reset by peer
11-20-2023 17:26:07.928 +0400 ERROR TcpOutputFd - Read error. Connection reset by peer
Hey @ITWhisperer  Thanks for the detailed and helpful response. This looks promising. I will try this out and will update the thread with further findings.
The version of Splunk I'm currently using is 9.1.1
Hello Splunk team,

As shown in the picture, I found a UI duplication problem in the data type selection module. I tested different browsers and found this problem in all of them. I suspect it is caused by a flaw in the Splunk UI design. Can you help solve this problem? Thank you!
But then I don't get the individual totals if I do that along with the message.
@merrelr You can make that object visible in all apps by exporting it to system    
I have a singlevalue trellis view showing the status of items: up (green) and down (red). When the status is down (red), I would like the trellis view to flash or blink. I found the HTML/CSS below for a blinking effect:

<html>
<style>
@keyframes blink {
  100%, 0% { opacity: 0.6; }
  60% { opacity: 0.9; }
}
#singlevalue rect {
  animation: blink 0.8s infinite;
}
</style>
</html>

However, it makes all single values blink. I am using rangemap to set the background of each single value to red/green. How can I make only the red panels in the trellis blink? Thanks a lot.
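One possible direction, not a confirmed answer: if the rangemap red is rendered as an inline fill on the rect element (check the exact color value with your browser's dev tools; the #dc4e41 used below is an assumption, not taken from the original post), a CSS attribute selector can restrict the animation to red rects only:

```
<html>
<style>
@keyframes blink {
  100%, 0% { opacity: 0.6; }
  60% { opacity: 0.9; }
}
/* Assumption: the "down" color appears in the rect's inline style.
   Replace #dc4e41 with whatever the inspector shows for your red panels. */
#singlevalue rect[style*="#dc4e41"] {
  animation: blink 0.8s infinite;
}
</style>
</html>
```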
I'm trying to make SOC use cases clear, concise, and easy to find later. It is possible to build threat detection use cases based on MITRE, but threat detection is not all a SOC does; there are many other requirements, such as compliance and business use cases. Which approach would be more effective and correct? Here are my questions.

Use Case Development:
- What are best practices for effective SOC use cases, and are there recommended frameworks?

Documentation and Knowledge Management:
- What strategies/tools help organize SOC use cases for searchability?

Continuous Improvement:
- What methods help improve and update SOC use cases over time?
- Can you share examples of how penetration testing results have influenced the development of SOC use cases?

Risk Assessment Integration:
- How do you align SOC use cases with risk levels identified in risk assessments?
- Are there specific metrics or indicators from risk assessments that should be incorporated into SOC use cases?
- What best practices do you suggest for regularly reviewing and updating SOC use cases based on changes in risk assessments?
Figured it out.

| makeresults
| eval dest="url1,url2,url3", dest_port=8443, dest=split(dest,",")
| mvexpand dest
| `sslcert(dest, dest_port)`
| lookup sslcert_lookup dest, dest_port OUTPUT ssl_subject_common_name ssl_subject_alt_name ssl_end_time ssl_validity_window
| eval ssl_subject_alt_name = split(ssl_subject_alt_name,"|")
| eval days_left = round(ssl_validity_window/86400)
| table ssl_subject_common_name ssl_subject_alt_name days_left
| sort days_left
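The days_left arithmetic (seconds of remaining validity divided by 86400) is the same calculation you would do outside Splunk. A Python sketch, with the fetch side included only for illustration (cert_days_left makes a live network call, so treat it as an assumption-laden example rather than a drop-in tool):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_left(not_after_epoch, now_epoch):
    """Equivalent of: eval days_left = round(ssl_validity_window / 86400)."""
    return round((not_after_epoch - now_epoch) / 86400)

def cert_days_left(host, port=8443):
    """Fetch the peer certificate over TLS and report days until expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like 'Jun  1 12:00:00 2025 GMT'
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    not_after = not_after.replace(tzinfo=timezone.utc)
    now = datetime.now(timezone.utc)
    return days_left(not_after.timestamp(), now.timestamp())
```

A cert expiring 30 days from "now" yields days_left of 30; an already-expired cert yields a negative number, which the sorted table would surface at the top.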
Hi @AL3Z,

Both locations are right.

The first one, \etc\system\local\inputs.conf, is the default inputs.conf file path. Generally, if you don't have different apps, you can use this file alone and specify all files for monitoring, whitelisting, and blacklisting there.

If you have lots of things to monitor, it is better to group them as "apps" and keep their config files in their particular folders, so troubleshooting becomes easier.

Then read up on configuration file precedence:
http://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Wheretofindtheconfigurationfiles