All Topics
I have two lookups and both have a field called DNS. I need to figure out which values in those fields match. I tried the below per a different thread, which in theory is what I'm looking for, but I kept getting an error (Error in 'from' command: Invalid dataset specifier) at the join command on line 3. A similar issue exists elsewhere, but its solution didn't work.

| inputlookup Test1.csv
| fields UserName, Count
| rename Count as Count1
| join type=inner UserName
    [| inputlookup Test2.csv
    | fields UserName, Count
    | rename Count as count2]
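A join-free pattern that often sidesteps this class of error is to append the second lookup and keep only keys present in both; a minimal sketch, assuming both files really do share the UserName field used in the query above:

| inputlookup Test1.csv
| fields UserName, Count
| rename Count as Count1
| append
    [| inputlookup Test2.csv
    | fields UserName, Count
    | rename Count as Count2]
| stats values(Count1) as Count1 values(Count2) as Count2 by UserName
| where isnotnull(Count1) AND isnotnull(Count2)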
I have 2 queries: one is an OFF event, and one is an ON event for a cluster of machines for customers. I want to calculate the approximate total hours of OFF time within a time range. Also, the assumption is that if an ON event is seen for the first time within the time picker range, the machine was already OFF before it. So, for example, for a 20-day period over the month of November:

OFF events:

CustomerID    Time/Date
ABC           11/2/2022
GHI           11/3/2022
GHI           11/9/2022
MNO           11/10/2022
PQR           11/14/2022
JKL           11/16/2022

ON events:

CustomerID    Time/Date
DEF           11/5/2022
GHI           11/7/2022
PQR           11/7/2022
JKL           11/12/2022
MNO           11/15/2022
JKL           11/18/2022

So, if today's date is November 20 and the time picker range is set to the last 20 days (making the time range 11/1/2022 to 11/20/2022):

OFF time for ABC is 11/20 - 11/2 = 18 days
OFF time for DEF is 11/5 - 11/1 = 4 days, since the machine is assumed to have been OFF before the ON event
OFF time for GHI is (11/7 - 11/3) + (11/20 - 11/9) = 4 + 11 = 15 days
OFF time for JKL is (11/12 - 11/1) + (11/18 - 11/16) = 11 + 2 = 13 days
OFF time for MNO is (11/15 - 11/10) = 5 days
OFF time for PQR is (11/7 - 11/1) + (11/20 - 11/14) = 6 + 6 = 12 days

So the total (approximate) OFF time for 6 customers over a range of 20 days is 18 + 4 + 15 + 13 + 5 + 12 = 67 days.

The query that I came up with just gives the customer data sorted by customer and decreasing time:

index=xaci sourcetype="xaxd" "*Powered off operation for*" OR "*Powered On operation for*"
| rex "[cC]ustomer:(?<customerID>\w+)"
| sort customerID, -_time

CustomerID    _time
ABC           11/2/2022
DEF           11/5/2022
GHI           11/9/2022
GHI           11/7/2022
GHI           11/3/2022
JKL           11/18/2022
JKL           11/16/2022
JKL           11/12/2022
MNO           11/15/2022
MNO           11/10/2022
PQR           11/14/2022
PQR           11/7/2022

Any help is highly appreciated since I'm new to Splunk.
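One way to approximate this in SPL is to tag each event ON or OFF, sort per customer by time, charge each ON event with the gap back to the previous event (or to the start of the range), and add a tail for customers whose last event is OFF. A rough sketch, assuming ON and OFF events strictly alternate per customer and reusing the extraction from the question; untested against the real data:

index=xaci sourcetype="xaxd" ("Powered off operation for" OR "Powered On operation for")
| rex "[cC]ustomer:(?<customerID>\w+)"
| eval state=if(searchmatch("Powered off"), "OFF", "ON")
| addinfo
| sort 0 customerID _time
| streamstats current=f last(_time) as prev_time by customerID
| eval off_secs=if(state="ON", _time - coalesce(prev_time, info_min_time), null())
| stats sum(off_secs) as off_secs last(state) as last_state max(_time) as last_time max(info_max_time) as range_end by customerID
| eval off_secs=coalesce(off_secs, 0) + if(last_state="OFF", range_end - last_time, 0)
| eval off_days=round(off_secs / 86400, 1)
| addcoltotals labelfield=customerID label=TOTAL off_days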
Hi All, one of our team members just asked me about pulling logs in from an Azure blob container. I read his doc about using a key or SAS token, but surely you can use a read-only service account with the Storage Blob Data Reader role on just that container to pull this down, right? I'm pretty sure we have multiple apps that do it that way now without giving them a key or SAS token for the whole storage account. Let me know, and thanks.
Hello, is it possible to create indexed fields on log files uploaded from my PC? The log file contains tens of thousands of records. I need to index two of the fields in order to run tstats against them. Running a normal search and extracting these two fields (and others) at search time is not efficient. If these two fields were indexed, I could run searches with tstats much, much faster. Thanks in advance and God bless, Genesius
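For reference, index-time fields are normally defined with a transform plus a fields.conf entry, and they only apply to data indexed after the config is in place, so the file would need to be re-uploaded. A minimal sketch with hypothetical names (a my_upload sourcetype and a user field extracted from "user=..." text); the REGEX must be adapted to the actual log format:

# props.conf
[my_upload]
TRANSFORMS-extract_user = extract_user_indexed

# transforms.conf
[extract_user_indexed]
REGEX = user=(\w+)
FORMAT = user::$1
WRITE_META = true

# fields.conf -- tells search time that the field is indexed
[user]
INDEXED = true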
The PingDirectory App documentation states to use log2metrics_json as the sourcetype. On-prem Splunk v9.0 does not have this as a pretrained sourcetype. I need to get Ping metrics logs from dsstats.json; I have created a metrics index and receive the following error when using the log2metrics_json sourcetype:

"....The metric event is not properly structured, source=/ping/latest/logs/dsstats.json, sourcetype=log2metrics_json, host=pdir0001, index=pdmetrics. Metric event data without a metric name and properly formated numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed....."

It appears that Splunk Cloud has the log2metrics_json sourcetype. If so, what props and transforms configs are needed to duplicate it?
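I can't confirm what Splunk Cloud ships for this sourcetype, but log-to-metrics conversion is generally wired up with a metric-schema transform on top of index-time JSON extraction; a sketch to adapt, treating every numeric field as a measure:

# props.conf
[log2metrics_json]
INDEXED_EXTRACTIONS = json
METRIC-SCHEMA-TRANSFORMS = metric-schema:log2metrics_json

# transforms.conf
[metric-schema:log2metrics_json]
METRIC-SCHEMA-MEASURES = _ALLNUMS_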
While processing an AS request for target service krbtgt, the account XXX-G-Dashboard-Dev did not have a suitable key for generating a Kerberos ticket (the missing key has an ID of 1). The requested etypes : 18 17 3. The accounts available etypes : 23. Changing or resetting the password of XXX-G-Dashboard-Dev will generate a proper key.

What is the regex to extract the words in red? Thanks.
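The red highlighting didn't survive the paste, but assuming the target values are the account name and the two etype lists, a rex sketch against the event text above:

... | rex "the account (?<account>\S+) did not have"
| rex "requested etypes : (?<requested_etypes>[\d ]+)\."
| rex "available etypes : (?<available_etypes>[\d ]+)\."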
In configuring Rules for Splunk Ingest Actions, I have a sourcetype configured with numerous "Filter with Regular Expression" stanzas that is properly dropping events. However, I'd like to have the same sourcetype drop messages where host=foo-*. I might be able to use the eval expression to do that, but I'm not sure how to construct it in a format that the UI accepts and that is functionally appropriate.

eval true = if(match(host,"^foo-"),true,null())

I'm sure that's wrong, but there really are no examples that I've been able to find other than "true()".
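If the "Filter with eval expression" rule works the way eval-based filters usually do (the expression evaluates to a boolean per event, and matching events are dropped — an assumption worth verifying in your version), the whole expression would just be the condition itself:

match(host, "^foo-")

or, written out with if():

if(match(host, "^foo-"), true(), false())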
Hello, I'm new to Splunk and I need some advice. I've created a lookup named my_color_lookup with 2 columns:

color,danger
red,high
yellow,medium
green,low

My base search is: sourcetype=foo AND customer_id=520. This search returns a number of events with several fields, one of which is src_light. I want to create a new field "risk_level" in my events: if src_light matches a color in my lookup, I want my search to

- add the value low, medium, or high to the new risk_level field,
- leave risk_level empty if there is no match.

Thanks for your help and suggestions.
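This is exactly what the lookup command does at search time; a minimal sketch, assuming my_color_lookup is set up as a lookup definition with the columns above:

sourcetype=foo customer_id=520
| lookup my_color_lookup color AS src_light OUTPUT danger AS risk_level

Events whose src_light matches no color simply get no risk_level value, which matches the "leave it empty" requirement.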
I keep getting this error when I push JSON from Python to Splunk.
Search:

index=xxxxx host_ip IN(16.121.12.123, 16.121.12.124, 16.121.12.126, 16.121.12.128) sourcetype=xxxxxxx
| search activity_status=done
| eval result=if(like(response, "200"), "success", "failure")
| stats count(eval(result="success")) AS Overall_Success, count(response) as total
| eval Success_per=(Overall_Success/total)*100.0
| stats avg(Success_per) as SuccessPer

How can I write a condition so that when SuccessPer is < 40 I see a message like "The application is less than 40%, please check.", and when SuccessPer is > 40 the SuccessPer value itself is displayed? How can I do this?
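A minimal sketch to append to the search above; it replaces SuccessPer with either the message or the rounded value (the threshold message text is just an example, and note the field becomes a string either way):

| eval SuccessPer=if(SuccessPer < 40, "The application is less than 40%, please check.", tostring(round(SuccessPer, 2)))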
Hi, has anyone seen this before? I'm using DB Connect to pull data in from a MySQL DB, and while the results shown in the data lab look correct, the raw events have the trailing zeros replaced with E7. Any idea how I stop this? Thanks in advance, Andy
Hi, I want to create a log level field for info logs. It should show the status information. For example, a field named status should show the OK count, INFORMATION count, ERROR count, and so on. Please find the logs below:

Status: INFORMATION: Description: Beginning GDP Fransaction Script: 01-22-2023-01-13-04-PM
Status: INFORMATION: Description: txt file already exists
Status: INFORMATION: Description: csv file already exists
Status: OK: Description: C:\GDPFransactionScript\Inputs\GDPTestFile.csv copy to USB successful
Status: OK: Description: C:\GDPTransactionScript\Inputs\GDPTestFile.txt copy to USB successful
Status: ERROR: Description: http POST failed:
Status: ERROR: Description: https POST failed:
Status: INFORMATION: Description: End of GDP Transaction Script: 01-22-2023-01-13-04-PM
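Assuming the status token always follows "Status: " as in the samples, a short sketch that extracts it and counts each level:

... | rex "Status:\s+(?<status>\w+):"
| stats count by status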
Hello, I have the following search, which produces statistics (746) in Splunk:

index=my_index sourcetype=my_st id=100 host!=10.* earliest=-1d@d
| stats values(repot) as repot dc(repot) as repot_count values(ip) as ip_address dc(ip) as ip_count by host
| table host ip_count ip_address repot_count repot

I am then using a lookup file to filter out unwanted hosts from the above search, which produces statistics (676) in Splunk:

| search [ | inputlookup my_host_list | table host ip_address ]
| dedup host
| table host ip_count ip_address repot_count repot

How would I determine the host names of the 70 missing hosts from the my_host_list lookup?
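One way is to mark which side each host came from and keep the hosts from the data that never appear in the lookup; a sketch, assuming the lookup's host values match the event host field exactly:

index=my_index sourcetype=my_st id=100 host!=10.* earliest=-1d@d
| stats count by host
| eval in_data=1
| append [| inputlookup my_host_list | table host | eval in_lookup=1]
| stats max(in_data) as in_data max(in_lookup) as in_lookup by host
| where in_data=1 AND isnull(in_lookup)
| table host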
How do I update the TLS protocols for Splunk servers and ports in a Splunk Enterprise environment?

Servers    Ports
A          8089
B          8089

Only the following protocols are to be used:

TLSv1.3:
0x13,0x01 TLS_
0x13,0x02 TLS_
0x13,0x03 TLS_
TLSv1.2:
0xC0,0x2B ECDHE-
0xC0,0x2F ECDHE-
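Protocols and ciphers for the management port (8089) are controlled by the [sslConfig] stanza in server.conf on each server, followed by a restart. A sketch only: the cipherSuite string below is an assumption (the truncated names above would need to be expanded to full OpenSSL cipher names), and TLS 1.3 availability depends on the OpenSSL build shipped with your Splunk version:

# server.conf on servers A and B
[sslConfig]
sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256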
Hi, I want to test sending Dynatrace logs to Splunk via the Dynatrace add-on and app from Splunkbase. I created inputs in the add-on and created configurations, but I get the following in _raw:

2023-03-10 12:30:55,868 ERROR pid=7140 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_Dynatrace\bin\splunk_ta_dynatrace\aob_py3\modinput_wrapper\base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_Dynatrace\bin\dynatrace_problem.py", line 72, in collect_events
    input_module.collect_events(self, ew)
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_Dynatrace\bin\input_module_dynatrace_problem.py", line 71, in collect_events
    entityDict = x["result"]["problems"]
KeyError: 'result'
2023-03-10 12:30:55,866 ERROR pid=7140 tid=MainThread file=base_modinput.py:log_error:309 | {"error":{"code":403,"message":"Token is missing required scope. Use one of: DataExport (Access problem and event feed, metrics, and topology), Davis (Dynatrace module integration - Davis)"}}
2023-03-10 12:30:55,647 INFO pid=7140 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2023-03-10 12:30:55,646 INFO pid=7140 tid=MainThread file=setup_util.py:log_info:117 | Log level is not set, use default INFO
2023-03-10 12:30:44,268 ERROR pid=9852 tid=MainThread file=base_modinput.py:log_error:309 | {"error":{"code":403,"message":"Token is missing required scope. Use one of: DataExport (Access problem and event feed, metrics, and topology), Davis (Dynatrace module integration - Davis)"}}
2023-03-10 12:30:44,021 INFO pid=9852 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2023-03-10 12:30:44,021 INFO pid=9852 tid=MainThread file=setup_util.py:log_info:117 | Log level is not set, use default INFO
2023-03-10 12:29:29,773 ERROR pid=10888 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_Dynatrace\bin\splunk_ta_dynatrace\aob_py3\modinput_wrapper\base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_Dynatrace\bin\dynatrace_problem.py", line 72, in collect_events
    input_module.collect_events(self, ew)
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_Dynatrace\bin\input_module_dynatrace_problem.py", line 71, in collect_events
    entityDict = x["result"]["problems"]
KeyError: 'result'
I'm not able to figure out how to use submitOnDashboardLoad in a normal XML dashboard. Where should I put it? I've tried putting it in the form, search, fieldset, and as an option name, but it's not working.
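For what it's worth, submitOnDashboardLoad looks like a Dashboard Studio option rather than a Simple XML one (an assumption worth checking); in Simple XML the equivalent behavior usually comes from the autoRun attribute on the fieldset, together with input defaults:

<form>
  <fieldset submitButton="true" autoRun="true">
    <input type="text" token="host_tok">
      <default>*</default>
    </input>
  </fieldset>
  ...
</form>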
Hi all Splunkynators, how can I sample incoming (HEC) data? I want to get statistical data/events while saving license volume, e.g. drop 9 of 10 incoming events. I look forward to your suggestions. Regards, Markus
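One ingest-time option is an INGEST_EVAL transform that routes a random 90% of events to the nullQueue; a sketch with a hypothetical sourcetype name, and it's worth verifying on your version that the transform fires for your HEC endpoint (raw vs. event) before relying on it:

# props.conf
[my_hec_sourcetype]
TRANSFORMS-sample = drop_9_of_10

# transforms.conf -- keep roughly 1 in 10 events
[drop_9_of_10]
INGEST_EVAL = queue=if(random() % 10 == 0, "indexQueue", "nullQueue")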
I need to take a CSV file as input with a list of UF hostnames and check, in a dashboard, whether they are reporting to the Splunk deployment server.
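One approach, run on the deployment server itself, is to compare the REST list of phoned-home clients against the CSV; a sketch assuming a lookup file uf_hostnames.csv with a hostname column (both names are hypothetical):

| rest /services/deployment/server/clients splunk_server=local
| fields hostname lastPhoneHomeTime
| eval reporting=1
| append [| inputlookup uf_hostnames.csv | fields hostname]
| stats max(reporting) as reporting max(lastPhoneHomeTime) as lastPhoneHomeTime by hostname
| eval status=if(reporting=1, "phoning home", "missing")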
Hello people, I am trying to run the Splunk query below:

base search
| rename msg.message as "message", msg.customer as "customer"
| eval Total_Count = 1, Total_Success = if(where isnull( msg.errorCode),"1","0"), Total_Error = if(where isnotnull( msg.errorCode),"1","0")
| fields Total_Count,Total_Success,Total_Error,message,customer
| stats sum(Total_Count) as Total, sum(Total_Success) as Success, sum(Total_Error) as Error
| eval successRate = ((Success/Total)*100)."%"
| stats Total, Success, successRate by customer

and I am getting the error below:

Error in 'eval' command: The expression is malformed. Expected IN.

Can anyone please let me know what I am doing wrong here? Thanks!
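For reference, a cleaned-up sketch of what the query appears to be aiming for: eval's if() takes a bare condition (the stray where keyword is what triggers "Expected IN"), field names containing dots need single quotes inside eval, the sums should be numeric rather than quoted strings, and the grouping belongs on the stats itself:

base search
| rename msg.message as message, msg.customer as customer
| eval Total_Success = if(isnull('msg.errorCode'), 1, 0)
| eval Total_Error = if(isnotnull('msg.errorCode'), 1, 0)
| stats count as Total, sum(Total_Success) as Success, sum(Total_Error) as Error by customer
| eval successRate = tostring(round((Success/Total)*100, 2))."%"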