Below is my dashboard XML code. The behavior I want to implement is to have the user's selection of values in the table's columns automatically populate the multiselect input. I don't know how to do this. Does anybody know what I can do? Please help.

<form version="1.1" theme="dark">
  <label>Sales DashBoard</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="globalTime" searchWhenChanged="true">
      <label>Select Time Range</label>
      <default>
        <earliest>0</earliest>
        <latest></latest>
      </default>
    </input>
    <input type="text" token="country" searchWhenChanged="true">
      <label>select Country</label>
      <default>*</default>
    </input>
    <input type="multiselect" token="client_token">
      <label>client_token</label>
      <choice value="*">ALL</choice>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>clientip="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <fieldForLabel>clientip</fieldForLabel>
      <fieldForValue>clientip</fieldForValue>
      <search>
        <query>index=main | stats count by clientip</query>
      </search>
      <default>*</default>
    </input>
    <input type="multiselect" token="field1" searchWhenChanged="true">
      <label>field1 $clicked_value$</label>
      <choice value="*">all</choice>
      <choice value="clicked_value">choice</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <fieldForLabel>products</fieldForLabel>
      <fieldForValue>products</fieldForValue>
      <search>
        <query>index=main productName=$clicked_value$ | stats count by productName</query>
      </search>
      <delimiter> </delimiter>
    </input>
    <input type="text" token="input_02" searchWhenChanged="true">
      <label></label>
      <default>$clicked_value$</default>
      <initialValue>$clicked_value$</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>test demo</title>
      <table>
        <title>Celltrion assignment $clicked_value$</title>
        <search>
          <query>index=main sourcetype="access*" action=purchase $client_token$ | stats values(productName) as products by clientip</query>
          <earliest>$globalTime.earliest$</earliest>
          <latest>$globalTime.latest$</latest>
        </search>
        <option name="drilldown">cell</option>
        <format type="color" field="clientips">
          <colorPalette type="minMidMax" maxColor="#118832" minColor="#FFFFFF"></colorPalette>
          <scale type="minMidMax"></scale>
        </format>
        <format type="number" field="clientips"></format>
        <drilldown>
          <set token="clicked_value">$click.value2$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>Actual Purchase Rate</title>
      <single>
        <title>transition from shopping cart to actual purchase</title>
        <search>
          <query>index=main sourcetype="access_combined_wcookie" status=200 action IN(addtocart, purchase) | iplocation clientip | search Country="$country$" | eval action_type=if(action="addtocart", "cart", if(action="purchase", "purchase", "other")) | stats count(eval(action_type="cart")) as cart_count count(eval(action_type="purchase")) as purchase_count | eval rate=round(purchase_count*100/cart_count, 2) | table rate</query>
          <earliest>$globalTime.earliest$</earliest>
          <latest>$globalTime.latest$</latest>
        </search>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0.00</option>
        <option name="rangeColors">["0xd41f1f","0xd94e17","0xf8be34","0x1182f3","0x118832"]</option>
        <option name="rangeValues">[60,70,85,90]</option>
        <option name="refresh.display">progressbar</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
</form>
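One approach (a sketch, not tested against this exact dashboard) is to set the multiselect's form token in the table's drilldown. Prefixing the token name with form. updates the input's displayed selection, not just the underlying token:

```
<drilldown>
  <set token="clicked_value">$click.value2$</set>
  <set token="form.client_token">$click.value2$</set>
</drilldown>
```

Note that this replaces the multiselect's current selection with the single clicked value; accumulating several clicked values into one multiselect generally needs custom JavaScript.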
Guys, I have set up routing through the syslog method and I've hit a problem: the logs arrive when I run tcpdump on the third-party system, but I can't see them in the other SIEM. How can I solve this issue? Help!
Hello all, I have two apps deployed on a Splunk forwarder agent, each with an outputs.conf file: the first one (all_UF_outputs) sends logs to the indexers' IPs, and the other (all_splk_outputs) sends logs to the indexers by hostname. How can I confirm which one has the highest precedence?
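One way to check (a sketch; the exact path depends on your install) is btool, which prints every effective outputs.conf setting together with the file it came from, so you can see which app "won":

```
$SPLUNK_HOME/bin/splunk btool outputs list --debug
```

When two apps define the same stanza and setting at the same precedence level, Splunk breaks the tie by ASCII ordering of the app directory names, so reading the btool output is more reliable than guessing.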
Hello Community, I have forwarded the data for Trend Micro to another third-party SIEM (QRadar) using a HF. These are the configurations I did:

# props.conf
[source::udp:1411]
TRANSFORMS-send_tmao_route = send_tmao_to_remote_siem

# transforms.conf
[send_tmao_to_remote_siem]
REGEX = .
SOURCE_KEY = _MetaData:Index
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_siem

# outputs.conf
[syslog:remote_siem]
server = remotesiem:1234
sendCookedData = false

I have verified with tcpdump that packets are arriving from the HF on the third-party system, but they do not appear in the SIEM. Why is that? Any help?
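One thing worth checking (a hedged suggestion, since the configuration above looks plausible): the [syslog:...] output sends over UDP by default, so if the receiving SIEM is listening on TCP, the packets will show up in tcpdump but never be accepted. The protocol can be set explicitly in outputs.conf:

```
# outputs.conf
[syslog:remote_siem]
server = remotesiem:1234
type = tcp   # default is udp; match whatever the SIEM's listener expects
```

Also confirm on the SIEM side that a log source is configured for the HF's source IP, since many SIEMs silently drop syslog from unknown senders.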
This is part of one table:

hostname | monitor | ip | other fields...
aaa | v | ...
aaa | x | ...
bbb | v | ...

How can I change the value 'x' to 'v' in the second row (when there are two different values for the same hostname, save it as 'v')? I need to keep the ip because it can be different, and the other fields can also differ. The main problem is that I join to this table by hostname, and the join relies on the value of monitor; sometimes it picks up 'x' when the real value is 'v'. Maybe I can join only where monitor is 'v'? I hope you understand.
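One way (a sketch, with other_field standing in for your real column names) is to collapse the duplicate hostnames before the join, treating 'v' as winning whenever any row for that hostname has it:

```
... your table search ...
| eval has_v=if(monitor="v", 1, 0)
| stats max(has_v) as has_v, values(ip) as ip, values(other_field) as other_field by hostname
| eval monitor=if(has_v=1, "v", "x")
| fields - has_v
```

Running this inside the subsearch (or saving its result as a new lookup) means the join by hostname only ever sees one row per host, with monitor already normalized to 'v'.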
Hi Team, we are using a modular input to ingest logs into Splunk. We have a checkpoint file, but we still see duplicate logs ingested into Splunk. How can we eliminate the duplicates? The application from which the logs are ingested is Tyk Analytics.
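As a search-time workaround (the real fix is in the modular input's checkpointing, since dedup does not remove data from the index), duplicates with identical raw events can be filtered in the search; the index and sourcetype names here are assumptions:

```
index=tyk sourcetype=tyk:analytics
| dedup _raw
```

If the duplicates differ slightly (e.g. different _time), dedup on the event's own unique key instead, such as a request or record ID field, if Tyk provides one.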
Hello all, we have a multi-site cluster master and we do not want a cold mount on the DR indexers. Is that feasible? If an indexer hits the hot/warm retention limit and cannot find the cold path, will it delete the data?
We have been running our indexer cluster as a multisite cluster with 3 indexers in our main site for the past year, with the configuration below:

site_replication_factor = origin:2,total:2
site_search_factor = origin:1,total:1

Now we have decided to establish a disaster recovery site with an additional 3 indexers. The expected configuration for the new DR site will be as follows:

site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2

I would like to ask how replication will work once the DR indexers are configured: will the replication process start syncing all logs in the hot, warm, and cold buckets, or only new hot data in real time?
I have used the approach below to forward logs from a specific index to a third-party system, in my case QRadar. Now I need to do the same but forward the specific index using syslog instead of TCP, because TCP takes time (I ran tcpdump to figure that out). This is the approach I followed:

# props.conf
[default]
TRANSFORMS-send_foo_to_remote_siem = send_foo_to_remote_siem

# transforms.conf
[send_foo_to_remote_siem]
REGEX = foo
SOURCE_KEY = _MetaData:Index
DEST_KEY = _TCP_ROUTING
FORMAT = remote_siem

# outputs.conf
[tcpout:remote_siem]
server = remotesiem:1234
sendCookedData = false

Thanks
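The same routing can be pointed at a syslog output by switching the DEST_KEY and the outputs stanza type; a sketch, assuming the rest of your setup stays as above:

```
# transforms.conf
[send_foo_to_remote_siem]
REGEX = foo
SOURCE_KEY = _MetaData:Index
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_siem

# outputs.conf
[syslog:remote_siem]
server = remotesiem:1234
type = udp   # syslog output defaults to udp; set tcp if the receiver expects it
```

Note that syslog output sends uncooked data and, over UDP, gives no delivery guarantee, so "faster" here trades off reliability.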
I have 2 queries, each with a subsearch reading an input lookup.

Query 1 outputs the timechart for CPU1. It counts each process listed in the CPU1 field of test.csv:

index=custom
| eval SEP=split(_raw,"|"), CPU1=trim(mvindex(SEP,1))
| bin _time span=1m
| stats count(CPU1) as CPU1_COUNT by _time CPU1
| search [ | inputlookup test.csv | fields CPU1 | fillnull value=0 | format ]

Query 2 outputs the timechart for CPU2. It counts each process listed in the CPU2 field of test.csv:

index=custom
| eval SEP=split(_raw,"|"), CPU2=trim(mvindex(SEP,1))
| bin _time span=1m
| stats count(CPU2) as CPU2_COUNT by _time CPU2
| search [ | inputlookup test.csv | fields CPU2 | fillnull value=0 | format ]

test.csv (sample):

CPU1,CPU2,CPU3
process_a,process_b,process_c
process_d,process_e,process_f
process_g,process_i,process_h

What I want is to display the CPU1 and CPU2 time charts in one chart. Any advice would be a great help. Thanks
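One approach (a sketch reusing the parsing above; the lookup wiring is an assumption) is to tag each event with the lookup column its process appears in, then timechart by that tag, which puts both series in a single chart:

```
index=custom
| eval SEP=split(_raw,"|"), process=trim(mvindex(SEP,1))
| lookup test.csv CPU1 as process OUTPUT CPU1 as in_cpu1
| lookup test.csv CPU2 as process OUTPUT CPU2 as in_cpu2
| eval cpu=case(isnotnull(in_cpu1), "CPU1", isnotnull(in_cpu2), "CPU2")
| where isnotnull(cpu)
| timechart span=1m count by cpu
```

This assumes test.csv is configured as a lookup; if a process can appear in both columns, decide which series should win via the case() ordering.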
Hi community, I'm wondering if it's possible to forward a specific index in Splunk to another third-party system or SIEM such as QRadar. I have read that this is possible with a HF, but I don't fully understand it. If yes, please give me an approach to do this. Thanks
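Yes, a heavy forwarder can route one index to a third-party destination. A sketch (the index name, host, and port are placeholders) based on the documented routing-and-filtering approach:

```
# props.conf (on the HF)
[default]
TRANSFORMS-route_to_qradar = route_myindex_to_qradar

# transforms.conf
[route_myindex_to_qradar]
REGEX = ^myindex$
SOURCE_KEY = _MetaData:Index
DEST_KEY = _TCP_ROUTING
FORMAT = qradar_out

# outputs.conf
[tcpout:qradar_out]
server = qradar.example.com:514
sendCookedData = false
```

sendCookedData = false makes the HF send plain text, which is what a non-Splunk receiver such as QRadar expects.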
Hi guys! How do I proceed to create alerts on inactive and unstable entities?
Hi all, I want to find the difference between two consecutive values of values.in65To127OctetFrames. My data looks like this:

{"name":"ethernet_counter","timestamp":1717838243109,"tags":{"interface_name":"Ethernet48","source":"sri-devgrp-prert00","subscription-name":"ethernet_counter"},"values":{"in65To127OctetFrames":2922198453881}}
{"name":"ethernet_counter","timestamp":1717837943109,"tags":{"interface_name":"Ethernet48","source":"sri-devgrp-prert00","subscription-name":"ethernet_counter"},"values":{"in65To127OctetFrames":2922102453899}}
{"name":"ethernet_counter","timestamp":1717837643345,"tags":{"interface_name":"Ethernet48","source":"sri-devgrp-prert00","subscription-name":"ethernet_counter"},"values":{"in65To127OctetFrames":2922006507704}}

I tried the following SPL, but I received "Error in 'EvalCommand': Type checking failed. '-' only takes numbers.":

index=gnmi name=ethernet_counter tags.source=sri-devgrp-prert00 earliest=06/08/2024:08:00:00 latest=06/08/2024:09:22:00
| sort _time
| streamstats current=f last(values.in65To127OctetFrames) as previous_value by tags.interface_name
| eval value_diff = values.in65To127OctetFrames - previous_value
| table _time tags.interface_name value_diff

I am very new to Splunk. Could someone help me write a proper SPL query? Many thanks, Kenji
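The error comes from the dot in the field name: inside eval, an unquoted values.in65To127OctetFrames is not read as a single field, so the subtraction sees a non-number. Wrapping the name in single quotes (a standard SPL fix) should work:

```
index=gnmi name=ethernet_counter tags.source=sri-devgrp-prert00
| sort _time
| streamstats current=f last('values.in65To127OctetFrames') as previous_value by tags.interface_name
| eval value_diff = 'values.in65To127OctetFrames' - previous_value
| table _time tags.interface_name value_diff
```

In eval expressions, single quotes mean "this is a field name", while double quotes mean a string literal.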
Hello everyone, I use the delta command in Splunk Enterprise to record the power consumption of a device; it gives me the difference in consumption. Now I want to add 3 more devices to the same chart, and the whole thing should be added up to a total consumption. Is this possible with delta, and if so, how? Which commands do I need for this? Greetings, Alex
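delta works on a single series, so one approach (a sketch; device and kwh are assumed field names for your device identifier and meter reading) is to compute the per-device difference with streamstats, then sum across devices per time bucket:

```
index=power
| sort _time
| streamstats current=f last(kwh) as prev_kwh by device
| eval consumption = kwh - prev_kwh
| timechart span=1h sum(consumption) as total_consumption
```

streamstats ... by device gives each device its own "previous value", which delta cannot do, and timechart then adds the devices into one total series.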
I have three dropdown lists: one with component (A, B, C, D), another with severity (Info, Warning), and a colour dropdown list. If I select A, Info, the colour dropdown should be shown; if I select B, Info, the colour dropdown should not be shown. How can I achieve this?
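One pattern (a sketch; the token names are assumptions) is a <change> handler on the severity dropdown that sets or unsets a token, plus depends on the colour dropdown. A caveat: <change> only fires when that input changes, so a similar handler is needed on the component dropdown too if the user may change it afterwards:

```
<input type="dropdown" token="severity" searchWhenChanged="true">
  <label>Severity</label>
  <choice value="Info">Info</choice>
  <choice value="Warning">Warning</choice>
  <change>
    <condition match="$component$==&quot;A&quot; AND $value$==&quot;Info&quot;">
      <set token="show_colour">true</set>
    </condition>
    <condition>
      <unset token="show_colour"></unset>
    </condition>
  </change>
</input>
<input type="dropdown" token="colour" depends="$show_colour$">
  <label>Colour</label>
  <choice value="red">Red</choice>
  <choice value="blue">Blue</choice>
</input>
```

When show_colour is unset, depends hides the colour dropdown entirely.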
Hello, the Event Timeline Viz is well suited for some work I am doing for a customer to understand jobs/alerts. I have discovered that it appears to display times based on the computer/browser timezone and NOT the Splunk timezone setting in user preferences, which doesn't agree with everything else in Splunk and will be difficult and confusing to explain. When I change my Splunk user preference timezone, I was surprised to find that the Event Timeline Viz does not change the displayed times, while other visualizations and Splunk time do change for the same search results. When I change my computer timezone, the Event Timeline Viz does change how time is displayed, and the "Now" line reflects the new computer timezone. I emailed the author but wanted to post here to see if anyone else had seen this issue and/or addressed it. The screenshot below was taken with the Splunk UI set to Pacific/Honolulu time; the Now line aligns with the computer timezone, not Splunk's. Thanks in advance for the help. Ian
Hello, I need to create a simple alert that would satisfy the DoD STIG below:

SPLK-CL-000320 - Splunk Enterprise must be configured to notify the System Administrator (SA) and Information System Security Officer (ISSO), at a minimum, when an attack is detected on multiple devices and hosts within its scope of coverage.

We do not have the budget to buy the paid Splunk security app, but we have Splunk Enterprise 9 installed. Moreover, we are inside an intranet, so attacks, if any, would be minimal. Therefore, I would like ideas on what would be considered an attack. For example, I have these ideas myself:

1. A user logs in but is denied access for whatever reason.
2. A user attempts to open a file he/she does not have rights to.

I am more experienced with Splunk than with Linux security, so any help would be appreciated.
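For idea 1, the _audit index already records failed Splunk logins, so a minimal scheduled alert (a sketch; the field names should be verified against your audit events, and the threshold of 5 is an arbitrary assumption to tune) could be:

```
index=_audit action="login attempt" info=failed
| stats count as failed_logins by user, clientip
| where failed_logins >= 5
```

Schedule it (e.g. every 15 minutes over the last 15 minutes) with an email action to the SA and ISSO; since the STIG mentions "multiple devices and hosts", also consider alerting when the distinct host count for such events exceeds a threshold.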
Hi All, I have a report running every 6 hours with the search query below. It fetches the hourly availability of HAProxy backends based on HTTP response codes, as shown below. I need to accelerate this report, but I think the bucket section of the search is disqualifying it for report acceleration. Can someone help modify this search so that it can be accelerated, or are there other workarounds that produce the exact same table as shown?

index=haproxy (backend="backend1" OR backend="backend2")
| bucket _time span=1h
| eval result=if(status >= 500, "Failure", "Success")
| stats count(result) as totalcount, count(eval(result="Success")) as success, count(eval(result="Failure")) as failure by backend, _time
| eval availability=tostring(round((success/totalcount)*100,3)) + "%"
| fields _time, backend, success, failure, totalcount, availability

_time | backend | success | failure | totalcount | availability
2024-06-07 04:00 | backend1 | 28666 | 0 | 28666 | 100.000%
2024-06-07 05:00 | backend1 | 28666 | 0 | 28666 | 100.000%
2024-06-07 06:00 | backend1 | 28712 | 0 | 28712 | 100.000%
2024-06-07 07:00 | backend1 | 28697 | 0 | 28697 | 100.000%
2024-06-07 08:00 | backend1 | 28678 | 0 | 28678 | 100.000%
2024-06-07 09:00 | backend1 | 28714 | 0 | 28714 | 100.000%
2024-06-07 04:00 | backend2 | 618 | 0 | 618 | 100.000%
2024-06-07 05:00 | backend2 | 179 | 0 | 179 | 100.000%
2024-06-07 06:00 | backend2 | 555 | 0 | 555 | 100.000%
2024-06-07 07:00 | backend2 | 103 | 0 | 103 | 100.000%
2024-06-07 08:00 | backend2 | 1039 | 0 | 1039 | 100.000%
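If acceleration keeps refusing the search, a common workaround is summary indexing: schedule an hourly search that writes the per-hour counts into a summary index with collect, then build the report from the summary. A sketch (the summary index name is an assumption and must already exist):

```
index=haproxy (backend="backend1" OR backend="backend2")
| bin _time span=1h
| eval result=if(status >= 500, "Failure", "Success")
| stats count as totalcount, count(eval(result="Success")) as success, count(eval(result="Failure")) as failure by backend, _time
| collect index=summary_haproxy
```

The 6-hourly report then reads the summary back, which is cheap regardless of raw event volume:

```
index=summary_haproxy
| stats sum(success) as success, sum(failure) as failure, sum(totalcount) as totalcount by backend, _time
| eval availability=tostring(round((success/totalcount)*100,3)) + "%"
| fields _time, backend, success, failure, totalcount, availability
```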
Hello all, I have tried many times to install add-ons, but Splunk does not accept my password, which I know for sure is correct. I tried resetting the password and trying again, with the same result. Does anybody have an idea how to fix this?
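If this is the local admin password on your own instance, the documented reset path (a sketch; it requires filesystem access and a Splunk restart) is to move etc/passwd aside and seed a new credential:

```
# stop Splunk, then:
#   mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak
# create $SPLUNK_HOME/etc/system/local/user-seed.conf containing:
[user_info]
USERNAME = admin
PASSWORD = <new password>
# restart Splunk; the seed file is consumed on startup
```

Also note a classic gotcha: when browsing apps from within Splunk Web, the login prompt asks for your splunk.com account credentials, not the local Splunk admin password, so the "wrong" password may simply be going to the wrong account.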
Hello, I have 2 files that contain the chain of the root Certificate Authority that issued my server certificate. Not sure if that is logically correct, but I do have 2 files that hold the certificate authority chain, and I must use both of them somehow in web.conf. My web.conf settings are similar to the below:

### START SPLUNK WEB USING HTTPS:8443 ###
enableSplunkWebSSL = 1
httpport = 8443

### SSL CERTIFICATE FILES ###
privKeyPath = $SPLUNK_HOME\etc\auth\DOD.web.certificates\privkey.pem
serverCert = $SPLUNK_HOME\etc\auth\DOD.web.certificates\cert.pem
sslRootCAPath =

For "sslRootCAPath =", I need to add both of the files I have. However, the Splunk documentation does not specify how to add multiple files, whether I can separate them with ",", or whether there is another property that allows additional CA files. The files and a sample of their content are below; they are made up of "noise" except for the header and footer sections, so I am not sure how I would combine them into a single file if that is what Splunk expects.

ca-200.pem
-----BEGIN CERTIFICATE-----
MIIEjzCCA3egAwIBAgICAwMwDQYJKoZIhvcNAQELBQAwWzELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
-----END CERTIFICATE-----

MyCompanyRootCA3.pem
-----BEGIN CERTIFICATE-----
MIIDczCCAlugAwIBAgIBATANBgkqhkiG9w0BAQsFADBbMQswCQYDVQQGEwJVUzEY
-----END CERTIFICATE-----

Any help is appreciated.
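sslRootCAPath accepts a single file, but that file may contain multiple PEM certificates back to back, so one approach (a sketch; the combined filename is an assumption) is to concatenate the two files and point web.conf at the result:

```
# on the Splunk host (use `type` instead of `cat` on Windows):
#   cat ca-200.pem MyCompanyRootCA3.pem > combined-ca.pem

# web.conf
sslRootCAPath = $SPLUNK_HOME\etc\auth\DOD.web.certificates\combined-ca.pem
```

The order of the BEGIN/END CERTIFICATE blocks inside the bundle generally does not matter for a trust store, as long as each block is kept intact.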