Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

The app "Splunk App for Fraud Analytics" introduced that we "can download and install test data from here. Please consider that using test data can use up to 7 GB and will take 10-30 minutes for the ... See more...
The app "Splunk App for Fraud Analytics" introduced that we "can download and install test data from here. Please consider that using test data can use up to 7 GB and will take 10-30 minutes for the test data to initialize correctly". But I did not find any test data attached.
Hi all, I successfully forward data from Windows using the following command from Install a Windows universal forwarder:

msiexec.exe /i splunkuniversalforwarder_x86.msi RECEIVING_INDEXER="indexer1:9997" WINEVENTLOG_SEC_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 AGREETOLICENSE=Yes /quiet

The same works for Linux with the command ./splunk add monitor /var/log from Configure the universal forwarder using configuration files. Both work fine and I can see the hosts in the Data Summary, as visible in the figure (Data Summary).

If I instead set up the input in the local inputs.conf file after a basic installation and assign a specific index, for example

[perfmon://LocalPhysicalDisk]
interval = 10
object = PhysicalDisk
counters = Disk Bytes/sec; % Disk Read Time; % Disk Write Time; % Disk Time
instances = *
disabled = 0
index = winfwtestinger

then I can see that data is ingested when I search that specific index, but the host does not appear in the Data Summary. I would be very happy about any suggestion as to what I am doing wrong here.

Best regards
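A hedged note for anyone hitting the same thing: the Data Summary view only counts events from the indexes a role searches by default, so events routed to a custom index such as winfwtestinger can be fully searchable yet absent from that view. A minimal sketch to confirm the forwarder is actually delivering the perfmon data:

index=winfwtestinger
| stats count latest(_time) AS last_seen BY host
| convert ctime(last_seen)

If this returns rows per host, ingestion is fine, and adding winfwtestinger to the role's default search indexes should make the host appear in the Data Summary.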
Hello, when I run a search I get the message "could not load lookup" with different lookup names. For example:

Could not load lookup=LOOKUP-Kerberosfailurecode
Could not load lookup=LOOKUP-Kerberosresultcode
Could not load lookup=LOOKUP-syscall

I had a look in the lookup definitions menu and I can see that some lookups are referenced in my Splunk apps even though I don't use these lookups in my apps! Can I change the app they belong to? Is it possible to change it? Moreover, some lookups like "syscall" don't exist in my lookup definitions menu at all, so how do I solve this issue please?
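A hedged pointer: names of the form LOOKUP-<name> are automatic lookups wired up in props.conf, so the references usually come from an app or add-on rather than from the Lookup definitions page. A sketch for tracing where they are defined, assuming CLI access to the search head:

$SPLUNK_HOME/bin/splunk btool props list --debug | grep -E "LOOKUP-(Kerberosfailurecode|Kerberosresultcode|syscall)"

The --debug flag prints the file each line comes from, which shows which app's props.conf is attaching the lookups.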
Let's say I have a table of two fields, and some of the cells are empty. How do I find the number of empty cells using "addcoltotals"?
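A minimal sketch of one approach, using hypothetical field names field1 and field2: turn emptiness into 1/0 flags, then let addcoltotals sum the flags.

... | eval empty_field1=if(isnull(field1) OR field1="", 1, 0)
| eval empty_field2=if(isnull(field2) OR field2="", 1, 0)
| addcoltotals empty_field1 empty_field2 labelfield=field1 label="empty cells"

The totals row then carries the number of empty cells per column; summing the two totals gives the overall count.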
Hello friends! I get JSON like this:

{"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338}

and so on:

{"key":"29.09.2023","value_sum":51150570.59,"value_cnt":2736}

The raw event looks like this:

10/4/23 1:23:03.000 PM   {"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338}
host = app-damu.hcb.kz   source = /opt/splunkforwarder/etc/apps/XXX/pays_7d.sh   sourcetype = damu_pays_7d

And I want to get a table like this:

days          sum            cnt
27.09.2023    35476232.82    2338
29.09.2023    51150570.59    2736

So I have to take the latest events and put them into the table. Please help.
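A sketch of one way to get there, assuming the JSON object is the whole event body so spath can pull out key, value_sum, and value_cnt:

sourcetype=damu_pays_7d
| spath
| stats latest(value_sum) AS sum latest(value_cnt) AS cnt BY key
| rename key AS days
| table days sum cnt

stats latest(...) keeps only the most recent value per key, which matches the "latest events" requirement.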
Hi all, I just wanted to ask if there is a possibility to pass a username and password when starting the Splunk forwarder (9.1.1) on a Linux system for the first time. With the Windows universal forwarder this is possible during installation via:

msiexec.exe /i splunkforwarder_x64.msi AGREETOLICENSE=yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=Ch@ng3d! /quiet

Is there a similar command for the Linux forwarder?

Best regards
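A sketch of the usual Linux-side equivalent: drop a user-seed.conf in place before the first start, and splunkd creates the admin account from it (the credentials below just mirror the Windows example):

# $SPLUNK_HOME/etc/system/local/user-seed.conf, created before first start
[user_info]
USERNAME = SplunkAdmin
PASSWORD = Ch@ng3d!

Then start non-interactively with ./splunk start --accept-license --answer-yes --no-prompt; the seed file is consumed on first startup.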
I have configured 5 domain controllers to send logs to Splunk by installing the UF. DC2 and DC5 are reporting Windows event log data as configured, but I am missing the other 3 DCs. All 5 are logging to _internal. What should I do to correct the logging?
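Since all five hosts reach _internal, connectivity and outputs look fine, and the gap is probably the event log inputs themselves. A sketch of how one might compare a working DC against a silent one (the path assumes a default Windows install):

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list WinEventLog --debug

On the silent DCs, check that stanzas such as [WinEventLog://Security] exist and carry disabled = 0; the --debug output also shows which app each setting comes from.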
Hi, I have this search:

| mstats avg("value1") prestats=true WHERE "index"="my_index" span=10s BY host
| timechart avg("value1") span=10s useother=false BY host WHERE max in top5

and I would like to count the hosts and trigger when I have fewer than 3 hosts. I tried something like this:

| stats dc(host) AS c_host | where c_host > 3

but it's not working as usual. Any idea? Thanks!
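A sketch of one fix: after timechart, each host has become its own column, so dc(host) has nothing left to count; untable restores a host field first. The comparison is written as < 3 to match the stated trigger condition:

| mstats avg("value1") prestats=true WHERE "index"="my_index" span=10s BY host
| timechart avg("value1") span=10s useother=false BY host WHERE max in top5
| untable _time host value
| stats dc(host) AS c_host BY _time
| where c_host < 3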
Hi, I'm trying to plot a graph of the average over the previous 2 matching weekdays. Below is the query used:

index="xyz" sourcetype="abc" (app_name="123" OR app_name="456") earliest=-15d@d latest=now
| rex field=msg "\"[^\"]*\"\s(?<status>\d+)"
| eval HTTP_STATUS_CODE=case(like(status, "2__"),"2xx")
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
| where current_day == log_day
| eval hour=strftime(_time, "%H")
| eval day=strftime(_time, "%d")
| stats count by hour day HTTP_STATUS_CODE
| chart avg(count) as average by hour HTTP_STATUS_CODE

This plots the graph for the complete 24 hours. I wanted to know if I can limit the graph to the current timestamp. Say the system time is now 11 AM; I want the graph plotted only up to 11 AM and not the entire 24 hours. Can it be done? Please advise.
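A sketch: hour is already a zero-padded string such as "09", so comparing it with the current hour just before the final chart trims the rest of the day (zero-padding makes the string comparison behave numerically):

| eval current_hour = strftime(now(), "%H")
| where hour <= current_hour
| chart avg(count) as average by hour HTTP_STATUS_CODE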
The log looks like this:

message: [22/09/23 10:31:47:935 GMT] [ThreadPoolExecutor-thread-15759] INFO failed.", suspenseAccountNumber="941548131", suspenseAccountBSB="083021", timeCreate as OTHER BUSINESS REASON returned by CBIS.", debtor RoutingType="BBAN", debtor Routing Id="013013", creditor RoutingType="BBA 6899-422f-8162-6911da94e619", transactionTraceIdentification-1311b8a21-6d6c-422b-8 22T10:31:42.8152_00306", instrId="null", interactionId="null", interactionOriginators tx_uid-ANZBAU3L_A_TST01_ClrSttlmve01_2023-09-22T10:31:42.8152 00306, txId-ANZBAU3L priority-NORM, addressingType=noAlias, flow-N5XSuspense.receive]

How do I extract the transactionTraceIdentification field? I already tried

| rex field=message "transactionTraceIdentification=\"(?<transactionTraceIdentification>.*?)\","

but the value is not extracted.
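A hedged observation and sketch: in the sample, the value follows a hyphen (transactionTraceIdentification-1311b8a21...) rather than the ="..." form the rex expects, so the quoted pattern can never match. Something like this may get closer, though the right terminator depends on what undamaged events look like:

| rex field=message "transactionTraceIdentification-(?<transactionTraceIdentification>[^\s,\"]+)"

Against the sample this captures 1311b8a21-6d6c-422b-8; if the real value continues past the space, widen the character class accordingly.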
Hello, I have the below dashboard panel, which is populated from a lookup:

Name     Organization    Count
Bob      splunk          2
Matt     google          15
smith    facebook        9

What I'm looking for is:
1. If I click Bob, it has to open a new search tab with the query | inputlookup mydetails.csv | search Name=Bob
2. If I click splunk, it has to open a new URL such as www.splunk.com

and likewise for all the other values. How do I achieve both within one panel?
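A sketch in Simple XML, assuming a classic dashboard table: <condition field="..."> inside <drilldown> lets each column act differently, and $click.value2$ holds the clicked cell's value (the Organization URL pattern is a guess generalized from the splunk.com example):

<drilldown>
  <condition field="Name">
    <link target="_blank">search?q=%7C%20inputlookup%20mydetails.csv%20%7C%20search%20Name%3D$click.value2|u$</link>
  </condition>
  <condition field="Organization">
    <link target="_blank">https://www.$click.value2$.com</link>
  </condition>
</drilldown>

If the organization-to-URL mapping is less regular than the examples suggest, a second lookup column holding the full URL would be a safer source for the link.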
Anyone have an idea on the below issue? The lookup table file and the lookup definition both exist, and both have read permission for everyone at the app level, but when I run

| inputlookup test

I get these errors:

The lookup table 'test' requires a .csv or KV store lookup definition.
The lookup table 'test' is invalid.

Initially the lookup definition was readable by everyone while the lookup file was readable only by admin, so I changed the file to everyone this afternoon and tried again, but I still get the same errors. By the way, this is on a production search head in a clustered environment.
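A sketch of a check that sometimes narrows this down: ask the search head which definition named test it actually sees, and what file backs it (the title assumes the definition is literally named test):

| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
| search title="test"
| table title filename eai:acl.app eai:acl.sharing

If nothing comes back, inputlookup is hitting a bare file with no matching definition; if the row exists but filename does not end in .csv or point at a KV store collection, that mismatch would explain the "requires a .csv or KV store lookup definition" error.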
Hello, I was trying to use a regex in props/transforms conf files to extract fields, but the field extraction is not working. Two sample events and my props/transforms conf files are given below. Any recommendations will be highly appreciated. Thank you so much.

props.conf

[mysourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=24
TRUNCATE = 9999
REPORT-fieldEtraction = fieldEtraction

transforms.conf

[fieldEtraction]
REGEX = \{\"UserID\":\"(?P<UserID>\w+)\","UserType":\"(?P<UserType>\w+)\","System":\"(?P<System>\w+)\","UAT":\"(?P<UAT>.*)\","EventType":\"(?P<EventType>.*)\","EventID":"(?P<EventID>.*)\","Subject":"(?P<Subject>.*)\","EventStatus":"(?P<EventStatus>.*)\","TimeStamp":\"(?P<TimeStamp>.*)\","Device":"(?P<Device>.*)\","MsG":"(?P<Message>.*)\"}

Sample events

2023-10-03T18:56:31.099Z OTESTN097MA4513020 TEST[20248] {"UserID":"8901A","UserType":"EMP","System":"TEST","UAT":"UTA-True","EventType":"TEST","EventID":"Lookup","Subject":"A516617222","EventStatus":"00","TimeStamp":"2023-10-03T18:56:31.099Z","Device":" OTESTN097MA4513020","Msg":"lookup ok"}
2023-10-03T18:56:32.086Z OTESTN097MA4513020 TEST[20248] {"UserID":"8901A","UserType":"EMP","System":"TEST","UAT":"UTA-True","EventType":"TEST","EventID":"Lookup","Subject":"A516617222","EventStatus":"00","TimeStamp":"2023-10-03T18:56:32.086Z","Device":" OTESTN097MA4513020","Msg":"lookup ok"}
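A hedged observation plus a sketch: the transform expects a literal "MsG" while both sample events carry "Msg", and a single mismatched literal makes the entire anchored pattern fail. One alternative that avoids a long brittle regex is a repeating key/value transform with dynamic field names (a REPORT transform whose FORMAT uses $1::$2 applies the pattern across the whole event):

[fieldEtraction]
REGEX = \"(UserID|UserType|System|UAT|EventType|EventID|Subject|EventStatus|TimeStamp|Device|Msg)\":\"([^\"]*)\"
FORMAT = $1::$2

Note this is a search-time REPORT, so it belongs on the search head and needs a restart or debug/refresh to take effect.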
Hello, I am seeing the below error in the internal logs. I am on a Splunk on-premises clustered environment.

10-03-2023 23:48:50.697 +0000 ERROR SearchParser [110001 TcpChannelThread] - The search specifies a macro 'get_tenable_sourcetype' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information.

How do I get rid of this error in our internal logs? I have checked under Macros and all configurations and I don't see this macro, but inside TA-tenable/local/macros.conf I see only:

[get_tenable_index]
definition = (index=abc)
iseval = 0

Please help me with your thoughts. Thanks.
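A sketch of two checks, assuming CLI access to the search head: btool shows every macros.conf stanza splunkd merges, regardless of app or sharing, so it can confirm whether the macro exists anywhere on disk:

$SPLUNK_HOME/bin/splunk btool macros list get_tenable_sourcetype --debug

If that returns nothing, the macro really is missing and something still calls it; searching index=_internal ERROR SearchParser "get_tenable_sourcetype" and inspecting the surrounding entries for the same search id can point at the saved search or dashboard that references it (possibly a default search shipped in the Tenable add-on).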
Hello, I am seeing the below error in the internal logs:

The lookup table XYZ does not exist or is not available

I have checked Lookup table files, Lookup definitions, and Automatic lookups, but didn't find this lookup. How do I get rid of this error? Any suggestions please.

Thanks
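A sketch for finding what still references the missing lookup (the message text is copied from the question; adjust to match the exact wording in splunkd.log):

index=_internal sourcetype=splunkd "lookup table" "XYZ"
| stats count BY host source component

The source and component of the hits usually narrow it down to a scheduled search, a dashboard, or an automatic lookup left behind in some app's props.conf.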
Hello guys. This is my first post here, to ask for help with extracting fields from a JSON object. Below is an example of the record:

{"pod":"fmd9p","time":"2023-10-03T21:49:39.31255352Z", "source":"/var/log/containers/fmd9p_default.log","container_id":"1ae53e1be","log": "I1003 14:49:39.312453 test_main.cc:149] trace_id=\"8aeb0\" event=\"Worker.Finish\" program_run_sec=25.1377 status=\"OK\""}

How can I extract trace_id, event, program_run_sec, and status from the log section automatically by setting up a sourcetype? Is it doable? Thanks for any help and advice.
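A sketch of one way, using a hypothetical sourcetype name: let KV_MODE=json pull out the top-level keys (including log), then run a search-time extraction against the extracted log field, which at that point contains plain quotes rather than \" escapes:

# props.conf on the search head; the sourcetype name is an assumption
[my_container_logs]
KV_MODE = json
EXTRACT-inner_kv = trace_id="(?<trace_id>[^"]*)"\s+event="(?<event>[^"]*)"\s+program_run_sec=(?<program_run_sec>[\d.]+)\s+status="(?<status>[^"]*)" in log

The trailing "in log" tells the extraction to run against the log field instead of _raw, so all four inner fields come out automatically at search time.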
I have a bunch of alerts. I received the email alert, but I did not receive the automatically created incident in ServiceNow. How do I troubleshoot this issue?
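A sketch of a first check, assuming the Splunk Add-on for ServiceNow and its snow_incident alert action (the action name is an assumption; substitute whatever the alert is configured to run): modular alert action executions and their errors land in _internal under component=sendmodalert.

index=_internal sourcetype=splunkd component=sendmodalert action="snow_incident"
| table _time log_level _raw

No rows at all suggests the action never fired (check the alert's trigger actions), while ERROR rows usually carry the ServiceNow API response explaining why the incident was not created.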
Hello, I'm working with a Splunk cluster which has two slave peers, and I need to disable an index from the Cluster Master using the REST API. I've tried the usual endpoint (/servicesNS/nobody/{app}/configs/conf-indexes/{index}) as this doc describes (https://docs.splunk.com/Documentation/Splunk/8.0.0/RESTREF/RESTconf#configs.2Fconf-.7Bfile.7D.2F.7Bs... ), but it doesn't seem to work on the Cluster Master. Can someone please point me to the specific REST API endpoint I should use to disable an index on the Cluster Master? I have read the documentation at https://docs.splunk.com/Documentation/Splunk/8.0.0/RESTREF/RESTcluster but there is no reference to what I need. Thank you in advance for your assistance.
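A hedged sketch of the usual alternative: on an indexer cluster, peer indexes.conf is distributed through the master's configuration bundle rather than edited per peer over REST, so one approach is to change the bundle and push it (the path reflects the pre-9.x master-apps layout matching the 8.0 docs cited above; the stanza name your_index is a placeholder):

# on the cluster master, in $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
[your_index]
disabled = true

followed by splunk apply cluster-bundle on the master to validate and distribute the change to the peers.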
I am trying to host Prometheus metrics on a Splunk app such that the metrics are available at the `.../my_app/v1/metrics` endpoint. I am able to create a handler of type PersistentServerConnectionApplication and have it return Prometheus metrics. The response, however, has status code `500` and content `Unexpected character while looking for value: '#'`. Prometheus metrics do not conform to any of the supported `output_modes` (atom | csv | json | json_cols | json_rows | raw | xml), so I get the same error irrespective of the output mode chosen. Is there a way to bypass the output check? Is there any other alternative for hosting output in a non-conforming format via a Splunk REST API?
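A sketch, inferred from the error rather than confirmed: the complaint about '#' looks like splunkd trying to parse the handler's return value as its JSON reply envelope, so wrapping the exposition text in that envelope's payload field, instead of returning the raw text, may avoid the 500:

import json

from splunk.persistconn.application import PersistentServerConnectionApplication


class MetricsHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        PersistentServerConnectionApplication.__init__(self)

    def handle(self, in_string):
        # hypothetical metrics text; a real handler would render its registry here
        metrics = ("# HELP demo_requests_total Total requests.\n"
                   "# TYPE demo_requests_total counter\n"
                   "demo_requests_total 42\n")
        # splunkd expects a JSON envelope describing the HTTP reply;
        # the Prometheus exposition text travels as an ordinary string payload
        return json.dumps({"payload": metrics, "status": 200})

Whether the envelope also honors a Content-Type header is not something I can confirm, so scrapers may still see the default content type with this approach.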
Good afternoon,

Background: I found a configuration issue in one of our firewalls which I'm trying to remediate, where an admin created a very broad access rule that has permitted traffic over a wide array of TCP/UDP ports. I started working to identify valid traffic which has used the rule, but a co-worker mentioned an easy win would be creating an ACL to block any ports which had not already been allowed through this very promiscuous rule. My problem: I know how to use the data model to identify TCP/UDP traffic logged egressing through the rule, but how could I modify the search below so that I get a result showing which ports have NOT been logged? (Bonus points if you can help me view the returned numbers as ranges rather than individual ports, e.g. "5000-42000".) Here is my current search:

| tstats values(All_Traffic.dest_port) AS dest_port values(All_Traffic.dest_ip) AS dest_ip dc(All_Traffic.dest_ip) AS num_dest_ip dc(All_Traffic.dest_port) AS num_dest_port FROM datamodel=Network_Traffic WHERE index="firewall" AND sourcetype="traffic" AND fw_rule="horrible_rule" BY All_Traffic.dest_port
| rename All_Traffic.* AS *

Thank you in advance for any help that you may be able to provide!
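A sketch of one way, with a caveat: the subsearch below can return up to 65,535 ports, which exceeds the default subsearch result limit of 10,000, so in practice the seen-ports list may need to flow through a lookup instead. The idea is to generate every port, drop the ones observed, then collapse what remains into ranges:

| makeresults count=65535
| streamstats count AS port
| search NOT
    [| tstats count FROM datamodel=Network_Traffic WHERE index="firewall" AND sourcetype="traffic" AND fw_rule="horrible_rule" BY All_Traffic.dest_port
    | rename All_Traffic.dest_port AS port
    | fields port]
| sort 0 port
| autoregress port AS prev_port
| eval new_range=if(isnull(prev_port) OR port - prev_port > 1, 1, 0)
| streamstats sum(new_range) AS range_id
| stats min(port) AS start max(port) AS end BY range_id
| eval port_range=if(start==end, tostring(start), start."-".end)
| fields port_range

The autoregress/streamstats section handles the bonus request: consecutive unseen ports share a range_id, so each output row becomes either a single port or a "start-end" range.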