
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Case scenario: Dashboard A is clicked, sending a token whose value is a hostname ($hostnameToken$) to Dashboard B. Dashboard B receives $hostnameToken$ and uses it in | search host_name=..., but that search sometimes returns "Results not Found". The logic I want is:

index=S score>=7.0
| lookup A.csv IP Address as ip OUTPUTNEW Squad
| lookup B.csv IP as ip OUTPUTNEW PIC, Email
| lookup C.csv ip as ip OUTPUTNEW host_name

IF (true)
| search host_name="$hostnameToken$"
THEN DO THIS:
| stats values(plugin) as Plugin values(solution) as Solution values(PIC) as pic values(Email) as email values(Squad) as squad by ip

ELSE (false)
| eval hostToken="$hostnameToken$"
| lookup CortexHostIp2.csv host_name as hostToken OUTPUTNEW ip
| search ip=ip
THEN DO THIS:
| stats values(plugin) as Plugin values(solution) as Solution values(PIC) as pic values(Email) as email values(Squad) as squad by ip

The fallback search works by converting the hostname token value to an IP via eval and lookup. If neither condition is met (both are false), the search should stop.

Question: How can conditional statements be implemented in the above query? What is the right query to use?
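A hedged sketch of one way this branch logic is often expressed in plain SPL, using the token and lookup names from the question: resolve the token to an IP up front and let a single condition accept either a hostname match or an IP match. The collapsed OR approach is an assumption on my part, not the asker's confirmed method, and token_ip is a made-up field name.

index=S score>=7.0
| lookup A.csv IP Address as ip OUTPUTNEW Squad
| lookup B.csv IP as ip OUTPUTNEW PIC, Email
| lookup C.csv ip as ip OUTPUTNEW host_name
| eval hostToken="$hostnameToken$"
| lookup CortexHostIp2.csv host_name as hostToken OUTPUTNEW ip as token_ip
| where host_name==hostToken OR ip==token_ip
| stats values(plugin) as Plugin values(solution) as Solution values(PIC) as pic values(Email) as email values(Squad) as squad by ip

If neither comparison is true, the where clause drops the event, which matches the "search stops" behavior described above.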
I am building a summary index that is fully refreshed (all records replaced) on a once-a-day schedule. For the report, I configured it via "Edit Summary Indexing" and set it to run on a schedule via "Edit Schedule".

savedsearches.conf
[si_summary-detail]
action.summary_index = 1
action.summary_index._name = summary-detail
action.summary_index._type = event
cron_schedule = 30 0 * * *
enableSched = 1
search = ... SPL ...

indexes.conf
[summary-detail]
frozenTimePeriodInSecs = 43200
quarantinePastSecs = 473385600
maxDataSize = auto_high_volume

When the search runs on schedule, the following command is appended. As a result, the data that lands in the summary index is incomplete. As a test I built the same search query myself and ran it, and the result was the same as the scheduled run: only 14,099,234 events, from 2015-03-04T02:17:07Z to 2022-06-03T08:57:42Z, make it into the summary index.

| summaryindex spool=t uselb=t addtime=t index="summary-detail" file="summary-detail.stash_new" name="si_summary-detail" marker=""

When I removed file="summary-detail.stash_new" and ran the search query, I got the correct result: 19,972,598 events, from 2015-03-04T02:17:07Z to 2022-08-25T04:01:36Z, go into the summary index.

| summaryindex spool=t uselb=t addtime=t index="summary-detail" name="si_summary-detail" marker=""

Since it works once the file option is removed, could the settings of the [stash_new] stanza be the cause? The [stash_new] stanza is still at its defaults; I have not changed it.

Please tell me what I should check so that all events end up in the summary index.
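A hedged diagnostic sketch rather than a confirmed fix: when a stash file is involved, size or truncation limits can silently drop part of the results, so one common first step is to look for parsing warnings around the scheduled run time. The search below only assumes access to the default _internal index.

index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) (component=LineBreakingProcessor OR component=AggregatorMiningProcessor OR component=IndexProcessor)
| stats count by component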
I have a csv file that is created by a shell script that runs every minute on a Linux server. I am running a forwarder on the server to send the data to Splunk. The csv file has a header line containing the field names. Some of the data has units along with the number. What I mean is, if the number is a percent, the number might look like "10%". If a number is measured in microseconds, the number might look like "10us". Is that the best way to ingest the data? Should I remove the units? Does Splunk see "10us" as a number? Thanks!
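On the units question: Splunk will store "10us" as-is, but a value with a unit suffix generally will not behave as a number in stats or eval until the unit is stripped. A minimal search-time sketch, assuming hypothetical field names cpu_pct and latency:

| eval cpu_pct_num=tonumber(replace(cpu_pct, "%$", ""))
| eval latency_us=tonumber(replace(latency, "us$", ""))
| stats avg(cpu_pct_num) avg(latency_us)

Emitting bare numbers from the shell script avoids this extra step, but either approach works.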
Hello, I have Splunk starting up with systemd and running as user splunk. I went to run the performance tasks on my indexers, and each of them failed. Under Triggered Collectors, it reads that the collector stack trace failed. I logged into the system in question and looked at the splunk_rapid_diag.log file:

tools_collector ERROR 139880958523200 - Error occurred for collector tcpdump while running `/usr/sbin/tcpdump -i any -w /tmp/tmpbkxib485/tcpdump_All_All.pcap` Process finished with code=1

How do I run diagnostic tools without root access? I expect this would affect any collectors using strace as well.

--jason
Hey Splunkers, I am working on a search but I have hit a roadblock. I am attempting to convert a UTC timestamp to CST within the search. I was able to convert the epoch times to CST, but I am struggling to find documentation on how to convert the UTC time to match my CST results. I need the time to match the other time zones. For example:

2022-08-31T21:04:52Z

needs to be converted to the same format as

08/31/2022 16:21:16
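A minimal sketch of one common approach, assuming the UTC string lives in a hypothetical field called utc_time and that a fixed offset is acceptable (5 hours for CDT, 6 for CST). Parsing with an explicit +0000 offset keeps the epoch value in UTC regardless of the search head's time zone:

| eval epoch_utc=strptime(replace(utc_time, "Z$", "+0000"), "%Y-%m-%dT%H:%M:%S%z")
| eval cst_time=strftime(epoch_utc - 5*3600, "%m/%d/%Y %H:%M:%S")

If your Splunk user time zone is already set to US Central, drop the manual offset and let strftime render the epoch directly, since strftime formats in the searching user's time zone.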
Hello, I've been tasked with optimizing a former colleague's saved searches and found that the query had a lot of rex commands targeting the same field, so I decided to compact them into one regex. I applied a combined regex (linked below). From Regex101, the query takes a whopping 6.5k steps, which is a bit too much, and I've been trying to reduce it as much as I can, but I lack the knowledge in that department to optimize the query further. The only things I want to keep are the capture groups; the rest I want to ignore altogether. Is there a way of doing that and reducing the steps? https://regex101.com/r/qDy1Lr/4
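Without seeing the original pattern it is hard to be specific, but two generic techniques usually cut Regex101 step counts: mark grouping-only parentheses as non-capturing with (?:...), and replace greedy .* runs with anchored, negated character classes so the engine backtracks less. A purely illustrative rex sketch with made-up field and group names:

| rex field=message "^(?:ERROR|WARN)\s+(?<code>\d{3})\s+user=(?<user>[^\s,]+)"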
I have a sourcetype which contains raw SNMP data that looks like this (port definitions for network switches):

timestamp=1661975375 IF-MIB::ifAlias.1 = "ServerA Port 1"
timestamp=1661975375 IF-MIB::ifDescr.1 = "Unit: 1 Slot: 0 Port: 1 Gigabit - Level"
timestamp=1661975375 IF-MIB::ifAlias.53 = "ServerA Port 2"
timestamp=1661975375 IF-MIB::ifDescr.53 = "Unit: 2 Slot: 0 Port: 1 Gigabit - Level"
timestamp=1661971775 IF-MIB::ifAlias.626 = "ServerA LAG"
timestamp=1661971775 IF-MIB::ifDescr.626 = " Link Aggregate 1"

I want to generate fields when this data is ingested into Splunk. I do not want to do this during search (so probably using transforms.conf and regex). I think there are ways to do this with Python as well, but I don't have the experience or time to go down that path.

The above 6 rows of example data would produce the following fields for each line respectively:

Alias=1, Description="ServerA Port 1"
Alias=1, Unit=1, Port=1
Alias=53, Description="ServerA Port 2"
Alias=53, Unit=2, Port=1
Alias=626, Description="ServerA LAG"
Alias=626, Lag=1

I can build field extractions or a manual regex to do one of these lines individually, but not all together. I also wonder if pure regex is the way to go here, as it seems like it would take many "steps" with this many parameters. I would really appreciate help from someone with the knowledge and experience of using transforms to get this done. Thank you in advance for solutions or recommendations.
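A hedged sketch of the index-time route, assuming a hypothetical sourcetype name snmp:ifmib and that the quoted values always follow the pattern shown; the ifDescr "Link Aggregate" case would need a third, similar transform, and index-time fields also need fields.conf entries. This illustrates the mechanism, not a tested config.

# props.conf
[snmp:ifmib]
TRANSFORMS-ifmib = ifmib_alias, ifmib_descr

# transforms.conf
[ifmib_alias]
REGEX = IF-MIB::ifAlias\.(\d+)\s*=\s*"([^"]*)"
FORMAT = Alias::$1 Description::$2
WRITE_META = true

[ifmib_descr]
REGEX = IF-MIB::ifDescr\.(\d+)\s*=\s*"\s*Unit:\s*(\d+)\s+Slot:\s*\d+\s+Port:\s*(\d+)
FORMAT = Alias::$1 Unit::$2 Port::$3
WRITE_META = true

# fields.conf
[Alias]
INDEXED = true
[Description]
INDEXED = true
[Unit]
INDEXED = true
[Port]
INDEXED = true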
Picking up my first project for SOAR detections. Asking if anyone knows groups or sites that helped them when they were new. Thanks in advance!
Hi Team, From the below raw JSON string in Splunk, I am trying to display only correlationId column in a table, can someone help with a query on how to achieve this? Also wanted to know if it can be achieved from a regular expression.

index= test1, sourcetype=abc

{
  "eventName": "test",
  "sourceType": "ats",
  "detail": {
    "field": "abctest-1",
    "trackInformation": {
      "correlationId": "12345",
      "components": [
        {
          "publisherTimeLog": "2022-08-31T13:19:18.726",
          "MetaData": "cmd",
          "executionTimeInMscs": "25",
          "receiverTimeLog": "2022-08-31T13:19:18.725"
        }
      ]
    },
    "value": "imdb",
    "timestamp": 1455677
  }
}

Output:

correlationId
12345
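A minimal sketch, assuming the event is valid JSON: spath is the usual way to pull a nested value, and a rex alternative covers the regular-expression part of the question.

index=test1 sourcetype=abc
| spath path=detail.trackInformation.correlationId output=correlationId
| table correlationId

Or, with a regular expression:

index=test1 sourcetype=abc
| rex "\"correlationId\"\s*:\s*\"(?<correlationId>[^\"]+)\""
| table correlationId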
I have two message threads, and each thread consists of ten messages. I need a search that displays these two chains as one. The new thread must consist of ten different messages: five messages from one system and five messages from the other (backup) system. Messages from a system share the same srcMsgId value; each system has a unique srcMsgId within the same chain. The message chain from the backup system reaches Splunk immediately after the messages from the main system. Messages from the standby system also carry a Mainsys_srcMsgId value, which is identical to the main system's srcMsgId.

How can I display a chain of all ten messages? Ideally first the messages from the first (main) system, then from the second (backup) one, showing the time they arrived at the server. Specifically, we want to see all ten messages one after another, in the order in which they arrived at the server: five from the primary, for example "srcMsgId": "rwfsdfsfqwe121432gsgsfgd71", and five from the backup, for example "srcMsgId": "rwfsdfsfqwe121432gsgsfgd72". The problem is that messages from other systems also arrive at the server and everything is mixed together chaotically, which is why we want to group all messages from one system and its counterpart in the search. Messages from the backup system are tied to the main system only through the "Mainsys_srcMsgId" parameter; using this key we understand that those messages come from the backup system (secondary to the main one).

Examples of messages from the primary and secondary system:

Main system:
{
  "event": "Sourcetype test please",
  "sourcetype": "testsystem-2",
  "host": "some-host-123",
  "fields": {
    "messageId": "ED280816-E404-444A-A2D9-FFD2D171F32",
    "srcMsgId": "rwfsdfsfqwe121432gsgsfgd71",
    "Mainsys_srcMsgId": "",
    "baseSystemId": "abc1",
    "routeInstanceId": "abc2",
    "routepointID": "abc3",
    "eventTime": "1985-04-12T23:20:50Z",
    "messageType": "abc4",
    ..........................................................................................

Message from backup system:
{
  "event": "Sourcetype test please",
  "sourcetype": "testsystem-2",
  "host": "some-host-123",
  "fields": {
    "messageId": "ED280816-E404-444A-A2D9-FFD2D171F23",
    "srcMsgId": "rwfsdfsfqwe121432gsgsfgd72",
    "Mainsys_srcMsgId": "rwfsdfsfqwe121432gsgsfgd71",
    "baseSystemId": "abc1",
    "routeInstanceId": "abc2",
    "routepointID": "abc3",
    "eventTime": "1985-04-12T23:20:50Z",
    "messageType": "abc4",
    "GISGMPRequestID": "PS000BA780816-E404-444A-A2D9-FFD2D1712345",
    "GISGMPResponseID": "PS000BA780816-E404-444B-A2D9-FFD2D1712345",
    "resultcode": "abc7",
    "resultdesc": "abc8"
  }
}

When we want to combine only the five messages of one chain, related by srcMsgId, we make the following request:

index="bl_logging" sourcetype="testsystem-2"
| transaction maxpause=5m srcMsgId Mainsys_srcMsgId messageId
| table _time srcMsgId Mainsys_srcMsgId messageId duration eventcount
| sort srcMsgId _time
| streamstats current=f window=1 values(_time) as prevTime by subject
| eval timeDiff=_time-prevTime
| delta _time as timediff
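A hedged sketch of one way to stitch the pair together, assuming every backup message carries the primary's id in Mainsys_srcMsgId: derive a common chain id and sort by arrival time. Field names come from the samples above, but the approach itself is an assumption, not a verified answer.

index="bl_logging" sourcetype="testsystem-2"
| eval chainId=coalesce(nullif(Mainsys_srcMsgId, ""), srcMsgId)
| search chainId="rwfsdfsfqwe121432gsgsfgd71"
| sort 0 _time
| table _time chainId srcMsgId Mainsys_srcMsgId messageId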
Hello, I have a question: is there any way to embed my dashboard in my website so that it updates by itself? Thanks a lot!
My team uses playbooks to automate email alerts in Phantom. Some playbooks have been randomly sending emails with the replacement character (a black diamond with a white question mark). Other times the emails are working fine and have normal text. Has anyone had this issue in the past? If so, how did you resolve it?  I was thinking of updating the Splunk SMTP App in Phantom. Thanks for the help!
Hello, I have an app on our cloud SH named A and I want to rename it to B. Which config change is required to rename an app on Splunk Cloud? I guess I need to open a case with Splunk support since we don't have backend access, but I am curious whether the rename goes in app.conf or in some other conf file. Please advise. Thanks,
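If the goal is only the display name, the label setting in app.conf's [ui] stanza is what Splunk shows in the app menu and launcher; renaming the app's directory/ID is a separate, larger change. A minimal sketch of the relevant stanza:

# app.conf
[ui]
is_visible = true
label = B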
The page doesn't have a download link that I can find and nothing in the documentation. Has it been removed?   https://splunkbase.splunk.com/app/6250/#/details
One of our alerts, CSIRT - Threat_Activity_Detection, came in on 8/31 but did not auto-assign the Incident Type I created (csirt - threat_activity_detection), and therefore the Response Template I created (CSIRT – Threat Activity Detection) for that incident was not assigned either. Is this a bug, or did I not configure this properly?
Hello, One of my company's firewalls sends more logs to Splunk every Tuesday, which pushes us over the 10 GB/day limit of our subscription. This only happens on Tuesdays. Does anyone know what the problem is, and how to bring the daily ingestion back to a uniform level? Thanks for the help, E
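A hedged starting point for narrowing it down, using the standard license usage log (this assumes access to the _internal index): break the daily volume out by sourcetype to see what the firewall is actually sending more of on Tuesdays; swapping st for h or s splits by host or source instead.

index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) by st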
This is the code:

import requests
import datetime

now = datetime.datetime.now()
# print(now)

data = {'ticket_id':'CH-12345','response_code':200,'service':'Ec2','problem_type':'server_down','time':now}
headers = {
    'Content-Type': 'application/json'
}

response = requests.post('https://localhost:8089/servicesNS/nobody/TA-cherwell-data-pull/storage/collections/data/cherwell_data',
                         headers=headers, data=data, verify=False, auth=('admin', 'changeme'))
print(response.text)

This is the error I am getting:

<msg type="ERROR">JSON in the request is invalid. ( JSON parse error at offset 1 of file "ticket_id=CH-12345&response_code=200&service=Ec2&problem_type=server_down&time=2022-08-31+20%3A28%3A53.237962": Unexpected character while parsing literal token: 'i' )</msg>

Please let me know if you need any more help
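A hedged correction sketch, not a tested client: the error appears because requests sends a dict passed via data= as form-encoded text rather than JSON, and the datetime object is not JSON-serializable anyway. Something like the following, keeping the same endpoint and credentials, posts actual JSON to the KV store collection.

import datetime
import requests

now = datetime.datetime.now()

# Serialize the timestamp to a string so the payload is valid JSON
data = {
    'ticket_id': 'CH-12345',
    'response_code': 200,
    'service': 'Ec2',
    'problem_type': 'server_down',
    'time': now.isoformat(),
}

# json= makes requests serialize the dict and set Content-Type: application/json
response = requests.post(
    'https://localhost:8089/servicesNS/nobody/TA-cherwell-data-pull/storage/collections/data/cherwell_data',
    json=data,
    verify=False,
    auth=('admin', 'changeme'),
)
print(response.text)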
Hello, I have a little problem with Splunk! I have a table that basically contains data in the following way:

number  value
1       A
1       B
2       C
3       D
3       E

I would like to have a table like:

number  value
1       A B
2       C
3       D E

As you can see, I would like to have the values combined in the same cells. If you have a solution, please share it!
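A minimal sketch with stats: values() collects the distinct entries for each number into one multivalue cell (use list() instead to keep duplicates and original order).

... | stats values(value) as value by number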
Hello, what's the best way to compare averages between two non-adjacent time periods? I have a bunch of API call events with a response_time field. I need a dashboard where I can see the performance difference between last month and the current month. If I try the following, the averages are somehow blank in the dashboard, but if I click the magnifying glass on the tile, I get a search query with values. What am I missing? Is there an even more efficient and faster way?

<form>
  <label>API Performance</label>
  <search id="multisearch">
    <query>| multisearch
      [ search earliest=$periodBeforeTok.earliest$ latest=$periodBeforeTok.latest$ index=A my_search_query response_time=*
        | eval response_time_before=response_time
        | fields api_request response_time_before
        | eval timeSlot="1" ]
      [ search earliest=$periodAfterTok.earliest$ latest=$periodAfterTok.latest$ index=A my_search_query
        | eval response_time_after=response_time
        | fields api_request response_time_after
        | eval timeSlot="2" ]
    </query>
  </search>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="periodBeforeTok">
      <label>Before Time Period</label>
      <default>
        <earliest>1658707200</earliest>
        <latest>1659312000</latest>
      </default>
    </input>
    <input type="time" token="periodAfterTok">
      <label>After Time Period</label>
      <default>
        <earliest>1659312000</earliest>
        <latest>1659916800</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Query Stats</title>
        <search base="multisearch">
          <query>| stats count as totalCount, count(eval(timeSlot=1)) as totalCountBefore, count(eval(timeSlot=2)) as totalCountAfter, avg(response_time_before) as response_time_before, avg(response_time_after) as response_time_after by api_request
            | eval response_time_before=round(response_time_before/1000,3)
            | eval response_time_after=round(response_time_after/1000,3)
            | eval delta_response_time=response_time_after-response_time_before
            | table api_request totalCountBefore totalCountAfter response_time_before response_time_after delta_response_time</query>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
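One possible culprit, offered as a guess rather than a confirmed diagnosis: a non-transforming base search only hands post-process searches the fields it explicitly keeps, and here timeSlot is evaluated after the fields command, so the post-process stats may not see everything it needs. A minimal sketch of the base query with all required fields retained at the end:

| multisearch
    [ search earliest=$periodBeforeTok.earliest$ latest=$periodBeforeTok.latest$ index=A my_search_query response_time=*
      | eval response_time_before=response_time, timeSlot="1" ]
    [ search earliest=$periodAfterTok.earliest$ latest=$periodAfterTok.latest$ index=A my_search_query
      | eval response_time_after=response_time, timeSlot="2" ]
| fields api_request response_time_before response_time_after timeSlot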
I have a Universal Forwarder accepting syslog traffic from multiple sources. The UF forwards up to indexers in Splunk Cloud. My question is two-fold: if I need an add-on, like the one for VMware ESXi logs, do I install it on the UF or request installation in Splunk Cloud? And if the latter, how does my UF know that it can now use any new sourcetypes? I've read through the installation notes on a few add-ons and have not seen any mention of how new sourcetypes are used outside of the server or instance the add-on is directly installed on. Thanks!
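On the second part, a sketch of the usual pattern, with a placeholder monitor path and sourcetype name rather than the real add-on's values: the UF only needs inputs.conf to stamp a sourcetype string on the data it forwards; the add-on's props/transforms that interpret that sourcetype live where parsing and searching happen, which in this setup is Splunk Cloud.

# inputs.conf on the Universal Forwarder (path and sourcetype are placeholders)
[monitor:///var/log/remote/esxi/*.log]
sourcetype = my:esxi:syslog
index = vmware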