Hello, I've been tasked with optimizing a former colleague's saved searches and found that the query ran a lot of rex commands against the same field, so I decided to compact them into one regex (the one at the Regex101 link below). Regex101 shows the pattern taking a whopping 6.5k steps, which is a bit too much, and I've been trying to reduce that as much as I can, but I lack the knowledge in that department to optimize the query further. One thing I do want to keep is the capture groups; the rest I want to ignore altogether. Is there a way of doing that and reducing the steps? https://regex101.com/r/qDy1Lr/4
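Without the actual pattern from the link it is hard to be specific, but two generic changes usually cut Regex101's step count: anchor the match, and mark every group you don't need as non-capturing with (?: ). A hypothetical sketch (the field and group names here are illustrative only, not taken from the linked regex):

| rex field=message "^(?:[^,]*,){2}(?<user>[^,]+),[^,]*,(?<status>\d{3})"

The (?: ) groups skip over text without storing captures, and the leading ^ anchor stops the engine from retrying the match at every offset in the string; atomic groups or possessive quantifiers, where the flavor supports them, reduce backtracking further.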
I have a sourcetype which contains raw SNMP data that looks like this (port definitions for network switches):

timestamp=1661975375 IF-MIB::ifAlias.1 = "ServerA Port 1"
timestamp=1661975375 IF-MIB::ifDescr.1 = "Unit: 1 Slot: 0 Port: 1 Gigabit - Level"
timestamp=1661975375 IF-MIB::ifAlias.53 = "ServerA Port 2"
timestamp=1661975375 IF-MIB::ifDescr.53 = "Unit: 2 Slot: 0 Port: 1 Gigabit - Level"
timestamp=1661971775 IF-MIB::ifAlias.626 = "ServerA LAG"
timestamp=1661971775 IF-MIB::ifDescr.626 = " Link Aggregate 1"

I want to generate fields when this data is ingested into Splunk, not at search time (so probably using transforms.conf and regex). I think there are ways to do this with Python as well, but I don't have the experience or time to go down that path.

The above six rows of example data should produce the following fields for each line respectively:

Alias=1, Description="ServerA Port 1"
Alias=1, Unit=1, Port=1
Alias=53, Description="ServerA Port 2"
Alias=53, Unit=2, Port=1
Alias=626, Description="ServerA LAG"
Alias=626, Lag=1

I can build field extractions or a manual regex to handle one of these lines individually, but not all of them together. I also wonder whether pure regex is the way to go here, since it seems like it would take many "steps" with this many parameters. I would really appreciate help from someone with the knowledge and experience of using transforms to get this done. Thank you in advance for solutions or recommendations.
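For what it's worth, a hedged sketch of one index-time approach (the sourcetype and stanza names are placeholders; index-time extractions live in props.conf/transforms.conf on the parsing tier, i.e. indexers or a heavy forwarder, not on a UF):

props.conf:

[snmp_ports]
TRANSFORMS-ifalias = snmp_ifalias
TRANSFORMS-ifdescr = snmp_ifdescr

transforms.conf:

# ifAlias lines: capture the interface index and the quoted alias text
[snmp_ifalias]
REGEX = IF-MIB::ifAlias\.(\d+)\s*=\s*"([^"]*)"
FORMAT = Alias::$1 Description::$2
WRITE_META = true

# ifDescr lines: capture the interface index plus Unit and Port numbers
[snmp_ifdescr]
REGEX = IF-MIB::ifDescr\.(\d+)\s*=\s*"\s*Unit:\s*(\d+)\s+Slot:\s*\d+\s+Port:\s*(\d+)
FORMAT = Alias::$1 Unit::$2 Port::$3
WRITE_META = true

The "Link Aggregate" variant would need a third stanza along the same lines. Each stanza only fires on lines its REGEX matches, which is what lets one sourcetype produce a different field set per line.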
Picking up my first project for SOAR detections. Does anyone know of groups or sites that helped them when they were new? Thanks in advance!
Hi Team, from the raw JSON string below in Splunk, I am trying to display only the correlationId column in a table. Can someone help with a query on how to achieve this? I also wanted to know if it can be achieved with a regular expression.

index=test1 sourcetype=abc

{
  "eventName": "test",
  "sourceType": "ats",
  "detail": {
    "field": "abctest-1",
    "trackInformation": {
      "correlationId": "12345",
      "components": [
        {
          "publisherTimeLog": "2022-08-31T13:19:18.726",
          "MetaData": "cmd",
          "executionTimeInMscs": "25",
          "receiverTimeLog": "2022-08-31T13:19:18.725"
        }
      ]
    },
    "value": "imdb",
    "timestamp": 1455677
  }
}

Desired output:

correlationId
-------------
12345
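A hedged sketch of both approaches, assuming the event is the JSON shown above:

index=test1 sourcetype=abc
| spath path=detail.trackInformation.correlationId output=correlationId
| table correlationId

and, if you would rather use a regular expression than JSON-aware extraction:

index=test1 sourcetype=abc
| rex "\"correlationId\"\s*:\s*\"(?<correlationId>[^\"]+)\""
| table correlationId

spath is the more robust choice here, since it follows the JSON structure rather than matching text that might also occur elsewhere in the event.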
I have two message threads, and each thread consists of ten messages. I need a request that displays these two chains as one. The new thread must consist of ten different messages: five messages from one system and five from the other (backup) system. Messages from a system share the same SrcMsgId value, and each system's SrcMsgId is unique within the same chain. The message chain from the backup system enters Splunk immediately after the messages from the main system. Messages from the standby system also carry a Mainsys_srcMsgId value, which is identical to the main system's SrcMsgId value. Tell me, how can I display a chain of all ten messages? Perhaps first the messages from the first (main) system, then from the second (backup), with the time of arrival at the server displayed.

Specifically, we want to see all ten messages one after the other, in the order in which they arrived at the server: five from the primary, for example ("srcMsgId": "rwfsdfsfqwe121432gsgsfgd71"), and five from the backup ("srcMsgId": "rwfsdfsfqwe121432gsgsfgd72"). The problem is that messages from other systems also come to the server and all messages are mixed chaotically, which is why we want to group all messages from one system and its counterpart together in the search. Messages from the backup system are associated with the main system only by the "Mainsys_srcMsgId" parameter; using this key, we understand that the messages come from the backup system (secondary to the main one). Examples of messages from the primary and secondary systems:

Main system:
{
  "event": "Sourcetype test please",
  "sourcetype": "testsystem-2",
  "host": "some-host-123",
  "fields": {
    "messageId": "ED280816-E404-444A-A2D9-FFD2D171F32",
    "srcMsgId": "rwfsdfsfqwe121432gsgsfgd71",
    "Mainsys_srcMsgId": "",
    "baseSystemId": "abc1",
    "routeInstanceId": "abc2",
    "routepointID": "abc3",
    "eventTime": "1985-04-12T23:20:50Z",
    "messageType": "abc4",
    ...

Message from backup system:
{
  "event": "Sourcetype test please",
  "sourcetype": "testsystem-2",
  "host": "some-host-123",
  "fields": {
    "messageId": "ED280816-E404-444A-A2D9-FFD2D171F23",
    "srcMsgId": "rwfsdfsfqwe121432gsgsfgd72",
    "Mainsys_srcMsgId": "rwfsdfsfqwe121432gsgsfgd71",
    "baseSystemId": "abc1",
    "routeInstanceId": "abc2",
    "routepointID": "abc3",
    "eventTime": "1985-04-12T23:20:50Z",
    "messageType": "abc4",
    "GISGMPRequestID": "PS000BA780816-E404-444A-A2D9-FFD2D1712345",
    "GISGMPResponseID": "PS000BA780816-E404-444B-A2D9-FFD2D1712345",
    "resultcode": "abc7",
    "resultdesc": "abc8"
  }
}

When we want to combine only the five messages from one chain (related by "srcMsgId") in a query, we make the following request:

index="bl_logging" sourcetype="testsystem-2"
| transaction maxpause=5m srcMsgId Mainsys_srcMsgId messageId
| table _time srcMsgId Mainsys_srcMsgId messageId duration eventcount
| sort srcMsgId _time
| streamstats current=f window=1 values(_time) as prevTime by srcMsgId
| eval timeDiff=_time-prevTime
| delta _time as timediff
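One way to stitch the two related chains together (a sketch, under the assumption that Mainsys_srcMsgId is empty on primary-system messages, as in the examples above) is to derive a common chain key with coalesce and then sort by arrival time:

index="bl_logging" sourcetype="testsystem-2"
| eval chainId=coalesce(nullif(Mainsys_srcMsgId, ""), srcMsgId)
| search chainId="rwfsdfsfqwe121432gsgsfgd71"
| sort 0 _time
| table _time chainId srcMsgId Mainsys_srcMsgId messageId

Primary messages keep their own srcMsgId as chainId, while backup messages inherit the primary's id via Mainsys_srcMsgId, so all ten messages line up under one key in server-arrival order, and messages from unrelated systems fall out of the search.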
What else should you know?

Essentials

ADVISORY | Customers are advised to check backward compatibility in the Agent and Controller Compatibility documentation.
Download Essential Components (Agents, Enterprise Console, Controller (on-prem), Events Service, EUM Components)
Download Additional Components (SDKs, Plugins, etc.)
How do I get started upgrading my AppDynamics components for any release?
Product Announcements, Alerts, and Hot Fixes
Open Source Extensions
License Entitlements and Restrictions
Hello, I have a question: I would like to know if there is any way to embed my dashboard in my website so that it updates by itself. Thanks a lot!
Join us for our newest version of the Boss of Operations and Observability (BOO) competition, taking place October 18, 2022.

Location: On-site [Munich] & Virtual [See Registration link for Slack info]

Kickoff: 2:00 PM [GMT +01:00 Central Europe]
Competition Start: 2:30 PM [GMT +01:00 Central Europe]
Competition End: 5:30 PM [GMT +01:00 Central Europe]

REGISTER NOW!

Questions? BOO@splunk.com
My team uses playbooks to automate email alerts in Phantom. Some playbooks have been randomly sending emails with the replacement character (a black diamond with a white question mark). Other times the emails are working fine and have normal text. Has anyone had this issue in the past? If so, how did you resolve it?  I was thinking of updating the Splunk SMTP App in Phantom. Thanks for the help!
Hello, I have an app on our cloud SH named A and I want to rename it to B. Which config change is required to change an app name on Splunk Cloud? I guess I need to open a case with Splunk support since we don't have backend access, but I am curious to know whether this rename should be made in app.conf or in some other conf file. Please advise. Thanks,
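For reference, the display name of an app lives in app.conf; a minimal sketch of the relevant stanza (the label value B is just this example's placeholder):

# app.conf (in the app's local or default directory)
[ui]
label = B

Note that [ui] label only changes the name shown in Splunk Web; the app's directory name and its [package] id are a separate matter, and on Splunk Cloud any such change would indeed go through support.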
The page doesn't have a download link that I can find, and there is nothing in the documentation. Has it been removed? https://splunkbase.splunk.com/app/6250/#/details
One of our alerts, CSIRT - Threat_Activity_Detection, came in on 8/31 but did not auto-assign the Incident Type I created (csirt - threat_activity_detection), and therefore the Response Template I created for that Incident Type (CSIRT - Threat Activity Detection) did not get assigned. Is this a bug, or did I not configure this properly?
Hello, one of my company's firewalls ingests more logs into Splunk every Tuesday, which makes us go over the 10 GB/day limit of our subscription. This only happens on Tuesdays. Does anyone know what the problem is, and how to bring the daily ingestion back to uniformity? Thanks for the help. E
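A sketch for pinpointing what spikes on Tuesdays, using the license usage log that Splunk keeps internally (split by sourcetype here; swap st for h to split by host instead):

index=_internal source=*license_usage.log type="Usage"
| timechart span=1d sum(b) as bytes by st

Comparing the Tuesday columns against the rest of the week should show which sourcetype (or host) is responsible for the extra volume.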
This is the code:

import requests
import datetime

now = datetime.datetime.now()
# print(now)

data = {
    'ticket_id': 'CH-12345',
    'response_code': 200,
    'service': 'Ec2',
    'problem_type': 'server_down',
    'time': now,
}
headers = {'Content-Type': 'application/json'}

response = requests.post(
    'https://localhost:8089/servicesNS/nobody/TA-cherwell-data-pull/storage/collections/data/cherwell_data',
    headers=headers,
    data=data,
    verify=False,
    auth=('admin', 'changeme'),
)
print(response.text)

This is the error I am getting:

<msg type="ERROR">JSON in the request is invalid. ( JSON parse error at offset 1 of file "ticket_id=CH-12345&response_code=200&service=Ec2&problem_type=server_down&time=2022-08-31+20%3A28%3A53.237962": Unexpected character while parsing literal token: 'i' )</msg>

Please let me know if you need any more information.
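The error shows the payload reached the KV store form-encoded (ticket_id=CH-12345&response_code=200&...) rather than as JSON: passing a dict via data= makes requests URL-encode it, and the datetime object would not be JSON-serializable anyway. A sketch of a fix under those assumptions (same endpoint and credentials as above):

import requests
import datetime

now = datetime.datetime.now()

data = {
    'ticket_id': 'CH-12345',
    'response_code': 200,
    'service': 'Ec2',
    'problem_type': 'server_down',
    'time': now.isoformat(),  # datetime objects are not JSON-serializable; send a string
}

# json= serializes the body as JSON and sets the Content-Type header for us
response = requests.post(
    'https://localhost:8089/servicesNS/nobody/TA-cherwell-data-pull/storage/collections/data/cherwell_data',
    json=data,
    verify=False,
    auth=('admin', 'changeme'),
)
print(response.text)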
Hello, I have a little problem with Splunk! I have a table that basically contains data in the following way:

number  value
1       A
1       B
2       C
3       D
3       E

I would like to have a table like:

number  value
1       A B
2       C
3       D E

As you can see, I would like to have the grouped values in the same cell. If you have a solution, please share it.
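A minimal sketch, assuming the two columns are fields named number and value:

... | stats values(value) as value by number

values() collapses the matching rows into one multivalue cell per number (sorted and deduplicated); use list(value) instead if you need duplicates or the original row order preserved.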
Hello, what's the best way to compare averages between two non-adjacent time periods? I have a bunch of API call events with a response_time field. I need a dashboard where I can see the performance difference between last month and the current month. If I try the following, the averages are somehow blank in the dashboard, but when I click the magnifying glass on the tile, I get a search query with values. What am I missing? Is there an even more efficient and faster way?

<form>
  <label>API Performance</label>
  <search id="multisearch">
    <query>| multisearch
      [ search earliest=$periodBeforeTok.earliest$ latest=$periodBeforeTok.latest$ index=A my_search_query response_time=*
        | eval response_time_before=response_time
        | fields api_request response_time_before
        | eval timeSlot="1" ]
      [ search earliest=$periodAfterTok.earliest$ latest=$periodAfterTok.latest$ index=A my_search_query
        | eval response_time_after=response_time
        | fields api_request response_time_after
        | eval timeSlot="2" ]
    </query>
  </search>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="periodBeforeTok">
      <label>Before Time Period</label>
      <default>
        <earliest>1658707200</earliest>
        <latest>1659312000</latest>
      </default>
    </input>
    <input type="time" token="periodAfterTok">
      <label>After Time Period</label>
      <default>
        <earliest>1659312000</earliest>
        <latest>1659916800</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Query Stats</title>
        <search base="multisearch">
          <query>| stats count as totalCount, count(eval(timeSlot=1)) as totalCountBefore, count(eval(timeSlot=2)) as totalCountAfter, avg(response_time_before) as response_time_before, avg(response_time_after) as response_time_after by api_request
            | eval response_time_before=round(response_time_before/1000,3)
            | eval response_time_after=round(response_time_after/1000,3)
            | eval delta_response_time=response_time_after-response_time_before
            | table api_request totalCountBefore totalCountAfter response_time_before response_time_after delta_response_time</query>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
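One hedged suggestion rather than a confirmed diagnosis: a non-transforming base search only reliably hands post-process searches the fields that are explicitly kept with a fields command, and the magnifying glass re-runs the combined query from scratch, which would explain why it works there. Moving the fields command to the end of the base search, after timeSlot exists, might help:

| multisearch
    [ search earliest=$periodBeforeTok.earliest$ latest=$periodBeforeTok.latest$ index=A my_search_query response_time=*
      | eval response_time_before=response_time, timeSlot="1" ]
    [ search earliest=$periodAfterTok.earliest$ latest=$periodAfterTok.latest$ index=A my_search_query
      | eval response_time_after=response_time, timeSlot="2" ]
| fields api_request response_time_before response_time_after timeSlot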
I have a Universal Forwarder accepting syslog traffic from multiple sources. The UF forwards up to indexers in Splunk Cloud. My question is two-fold: if I need an add-on, like the one for VMware ESXi logs, do I install it on the UF or request installation in Splunk Cloud? And if the latter, how does my UF know that I can now use any new sourcetypes? I've read through the installation notes on a few add-ons and have not seen any mention of how new sourcetypes are used outside of the server or instance where the add-on is directly installed. Thanks!
Hello, we had a standalone search head and indexer in a pre-production environment; then I created a new clustered environment with 2 SHs and 2 indexers. I want to add those old non-clustered search head and indexer to the cluster. Could you let me know the right commands/procedures to add them to the existing cluster? Do I need to remove the old Splunk instances and reinstall from scratch? I understand the old non-clustered data may be removed, but this is not a problem as it is mostly frozen. Thanks for your help.
Is there a more elegant way to do this? I'm new to using rex, and I can't seem to strip out the multiple parentheses and slashes from a field without using replace. (I don't have control over the data; I know it would be better to strip them out first.) These do work, but in some cases there are more parentheses and slashes. Is there a way to strip all of them out at once, or do I need to keep adding repeating phrases?

| rex mode=sed field=Field_A "s/\(\)/ /g"
| rex mode=sed field=Field_B "s/\(\)/ /g"
| rex mode=sed field=Field_B "s/\// /g"
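A single sed expression with a character class can strip any mix of these characters in one pass (a sketch; extend the class with whatever else needs to go):

| rex mode=sed field=Field_A "s/[()\/]+/ /g"
| rex mode=sed field=Field_B "s/[()\/]+/ /g"

The class [()\/]+ matches runs of (, ), and / wherever and however often they appear, so there is no need to repeat a phrase per character.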
After we upgraded to v9.0.1, we get a warning when following dashboard-generated links pointing "outside" Splunk:

Redirecting away from Splunk
You are being redirected away from Splunk to the following URL:
https://[some non-splunk web-server]
Note that tokens embedded in a URL could contain sensitive information.

It comes with a "Don't show again" option, but it indeed shows again every time. Is there somewhere to disable this warning? Thanks