All Topics

Apologies if this is a repetitive question, but I couldn't find information anywhere. We have the Splunk On-Call (VictorOps) Slack integration set up, but when an on-call change happens, it doesn't actually notify the user in Slack, even when the Slack user is linked to their VictorOps account. Instead, a message just appears in the relevant channel with their VictorOps username. Am I missing a place where this can be configured properly?
I have Linux audit records that have a field called type, and fields with the naming convention lower(type).field. I want to be able to combine type, as a prefix, with a set of suffixes to create new field names that exist in the data. For example, I have a type called FILE_WATCH and fields called file_watch.exe, file_watch.egid, file_watch.comm, etc. I want to build a dashboard table, by type and suffix, in Splunk to show whether a particular field exists for a type. So, going back to my example with type=FILE_WATCH, how can I create a new field name along these lines?

base = lower(type)
exe = {base}.".exe"  # does not work, but you get the idea

With exe now equal to the field name, I want to be able to dereference the new field name to see if it exists.
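An untested sketch of one way to approach this: SPL's eval cannot read a field whose name is computed at search time, but the foreach command can, because it expands a template once per matching field, with <<MATCHSEG1>>/<<MATCHSEG2>> standing in for the wildcard segments (field names below come from the FILE_WATCH example in the question):

```
| eval base=lower(type)
| foreach *.*
    [ eval suffixes=if("<<MATCHSEG1>>"==base AND isnotnull('<<FIELD>>'),
                       mvappend(suffixes, "<<MATCHSEG2>>"), suffixes) ]
| stats values(suffixes) as present_suffixes by type
```

The single-quoted '<<FIELD>>' is what lets eval read a field containing a dot; the result should be, per type, the list of suffixes that actually occur with a matching prefix.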
Looking for help with this rex command. I want to capture the continuous string after "invalid user", whether it has special characters or not. Here are some examples from my data set (abc is just an example; it could be any word or character):

invalid user abc
invalid user abc@def
invalid user $abc
invalid user abc\def
invalid user abc-def

If I run the below, I am able to successfully extract the invaliduser when it is a plain word, but it does not work if there is a special character:

base search | rex "invalid user (?<invaliduser>\w+) "

I have figured out how to extract a value with a leading special character (\W+\w+) or a special character in the middle (\w+\W+\w+), but those aren't exactly what I'm looking for. Is there a single rex command I can use to capture all possible results?
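A sketch of one likely fix, checked mentally against the samples above: \S+ matches any run of non-whitespace characters, so it covers letters, digits, and all the special characters in the examples, stopping at the first space (field name invaliduser as in the original):

```
base search
| rex "invalid user (?<invaliduser>\S+)"
```

Note this captures up to the next whitespace character, so a username containing a literal space would still need a different pattern.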
Hello, I am pretty new to Splunk and have been tasked with generating several reports like this one in Splunk (the original reports came from a SQL tool). I really need help finding the right query, especially how to include certain users and exclude others. Your help is greatly appreciated!

* Collect all available log sources.
* Generate a report that shows Changes to System Sec Config events that occurred on the previous day, grouped by source user.

• Format: .csv, list of events, table with a subset of fields (User, Date/Time, Event, Group, oHost, Host (Impacted), oLogin, VendorMsgID, Domain Impacted), grouped by User
• Schedule: daily
• Search window: -24 hours
• Expiration: 30 days

# Technical Context
The following events are of interest:
Vendor Message IDs - 4727, 4728, 4729, 4730, 4731, 4732, 4733, 4734, 4735, 4736, 4737, 4740, 4754, 4755, 4756, 4757, 4758, 4759, 4783, 4784, 4785, 4786, 4787, 4788, 4789, 4791, 631
AND User is NOT xxx, system, xxx, xxxx, xxxxx,
AND User (Impacted) IS NOT (res group name)
AND Host (Impacted) IS NOT %sc% (SQL PATTERN), %sd% (SQL PATTERN), ^sc.+, ^sd.+
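A rough SPL starting point for a report like the one described, under several assumptions: Windows Security events live in an index called wineventlog with sourcetype WinEventLog:Security and fields EventCode, user, and host (all of these, and the excluded user "system", are placeholders to adjust for your environment):

```
index=wineventlog sourcetype="WinEventLog:Security"
    EventCode IN (4727, 4728, 4729, 4730, 4731, 4732, 4733, 4734, 4735, 4736,
                  4737, 4740, 4754, 4755, 4756, 4757, 4758, 4759, 4783, 4784,
                  4785, 4786, 4787, 4788, 4789, 4791, 631)
    NOT user IN ("system")
| regex host!="^(sc|sd)"
| table user, _time, EventCode, host
| sort 0 user
```

Saved as a scheduled report running daily over the previous 24 hours, with a CSV email action, this should cover the schedule and format requirements; the SQL-style %sc%/%sd% patterns translate to the regex filter shown.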
Hello, I know that the mvsort command sorts values lexicographically, but I want the output sorted numerically, as below:

62.0.3.75
63.0.3.84
75.0.3.80
92.0.4.159
119.0.6.159

@ITWhisperer
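An untested sketch of a common workaround: zero-pad each octet so that lexicographic order equals numeric order, mvsort the padded keys, then strip the padding again (the field name ver is made up for the example; mvmap and printf need a reasonably recent Splunk version, 8.0+ I believe):

```
| makeresults
| eval ver=split("92.0.4.159 62.0.3.75 119.0.6.159 63.0.3.84 75.0.3.80", " ")
| eval key=mvmap(ver, printf("%03d.%03d.%03d.%03d",
      tonumber(mvindex(split(ver, "."), 0)),
      tonumber(mvindex(split(ver, "."), 1)),
      tonumber(mvindex(split(ver, "."), 2)),
      tonumber(mvindex(split(ver, "."), 3))))
| eval key=mvsort(key)
| eval sorted=mvmap(key, replace(key, "0*(\d+)", "\1"))
```

The replace() at the end removes the leading zeros from each octet, restoring the original strings in numeric order.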
Hi, I have two datasets. For example:
1. index=abc host=def_inven — consider this Dataset A (inventory, with 100 servers), and
2. lookup = something — consider this Dataset B (monitored in Splunk, with 80 servers).
How can I identify the 20 missing servers?
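A sketch of one standard approach, assuming both datasets share a host field (the lookup name "something" comes from the example; the field names are assumptions): take the inventory hosts and subtract the ones present in the monitored lookup.

```
index=abc host=def_inven
| stats count by host
| search NOT
    [| inputlookup something
     | fields host ]
| table host
```

The subsearch returns the ~80 monitored hosts; the NOT removes them from the inventory list, leaving the ~20 that are not monitored.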
Hi, if I have process events like:

PID | ProcessName | CommandLine | SpawnedByPID
100 | process_1 | process_1_commandLine | 99
101 | process_2 | process_2_commandLine | 100
200 | process_3 | process_3_commandLine | 199
201 | process_4 | process_4_commandLine | 200

is there any viz that will map processes in a folder/EDR-like tree (where I can also click on a node and get more info)? For example, the final results are based on PID, but the viz would look something like:

|-> process_name_99
|----> process_1 (hover or click would yield the token process_1_commandLine)
|--------> process_2
|-> process_name_199
|----> process_3
|--------> process_4

Something like pstree, just more advanced, and connected by PID rather than by name.
Hi! Is it possible to report errors without throwing an exception / crashing the app? I'd like to report some custom user data for certain events, as described here, without throwing an exception: https://docs.appdynamics.com/appd/23.x/23.6/en/end-user-monitoring/mobile-real-user-monitoring/instrument-android-applications/customize-the-android-instrumentation#id-.CustomizetheAndroidInstrumentationv23.2-user-dataCustomUserData

I tried the following, but it wasn't reported, nor could I see it in the crashes view:

Instrumentation.setUserData("Custom_event_key", "Some event happened");
Instrumentation.reportError(e, ErrorSeverityLevel.CRITICAL);

If this is possible, where can I monitor that data in AppDynamics? Or is this just extra data that will only be added to crash reports?
Hello Team, everyone has probably seen this error:

Error in 'TsidxStats': _time aggregations are not yet supported except for count/min/max/range/earliest/latest

I'm trying to understand which fields the stats command can use with which aggregations, and I don't want to try every field one by one. Can I see this field list in the GUI or CLI?
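For context, a sketch of what the error message permits: with tstats (which is what 'TsidxStats' refers to), the _time field only supports the aggregations listed, so something like this should run (the index here is just an example), while e.g. avg(_time) would trigger the error:

```
| tstats count min(_time) as first_seen max(_time) as last_seen
    where index=_internal by sourcetype
```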
Hi, from the logs I have extracted the data below (Table 1). I would like to add another column, as in Table 2, with a custom keyword: if the filename begins with xyz, then "Core". Could you please suggest what Splunk query or logic we could apply?

Splunk Query:
base search | rex field User | rex field Folder | rex field File | table User Folder File

Table 1:
User | Folder | File
ABC | first | xyz07122023

Table 2 (required output):
User | Folder | File | Consumer
ABC | first | xyz07122023 | Core
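A sketch of the conditional column, assuming the File field is already extracted as shown (the "Core" label and xyz prefix come from the example; add more branches to the case() for other prefixes):

```
base search
| eval Consumer=case(like(File, "xyz%"), "Core")
| table User Folder File Consumer
```

like() with the % wildcard mirrors the SQL-style prefix match; rows whose File does not start with xyz are left with an empty Consumer.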
Hi all! I display a map in a dashboard. The map only contains one point, and I need to center the map dynamically on this point. But I can't do it, because I'm not able to put anything other than an integer into the "center" value (I tried a token, for example). How can I do this? Thank you a lot.

My search:

"ds_search_1_new": {
    "type": "ds.search",
    "options": {
        "query": "xxx | table lon lat",
        "queryParameters": {
            "earliest": "$time.earliest$",
            "latest": "$time.latest$"
        },
        "enableSmartSources": true
    },
    "name": "mapSearch"
}

My visualization:

"viz_map_1": {
    "type": "splunk.map",
    "options": {
        "zoom": 0,
        "layers": [
            {
                "type": "marker",
                "latitude": "> primary | seriesByName('lat')",
                "longitude": "> primary | seriesByName('lon')",
                "bubbleSize": "> primary | seriesByName('lat')"
            }
        ],
        "center": [0, 0]
    },
    "dataSources": {
        "primary": "ds_search_1_new"
    },
    "title": ""
}
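One thing that may be worth trying (a sketch only — I'm not certain the map's center option accepts tokens): Dashboard Studio can reference a data source's results as tokens with the $<datasource name>:result.<field>$ syntax, which would let center follow the "mapSearch" search defined above:

```
"center": ["$mapSearch:result.lat$", "$mapSearch:result.lon$"]
```

If center rejects non-numeric values, an alternative is a chained search that formats lat/lon and drives the tokens from there.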
Hello team, I am facing an issue setting up a cloud-like architecture using docker-splunk. I am following this page: https://github.com/splunk/docker-splunk/blob/develop/docs/advanced/DISTRIBUTED_TOPOLOGY.md and I am getting errors when starting the SH and CM containers.

On sh1 I get this error:

fatal: [localhost]: FAILED! => {
    "attempts": 60,
    "changed": false,
    "cmd": [
        "/opt/splunk/bin/splunk", "init", "shcluster-config",
        "-auth", "admin:Abc@1234",
        "-mgmt_uri", "https://sh1:8089",
        "-replication_port", "9887",
        "-replication_factor", "2",
        "-conf_deploy_fetch_url", "https://dep1:8089",
        "-secret", "",
        "-shcluster_label", "shc_label"
    ],
    "delta": "0:00:00.593771",
    "end": "2023-12-06 07:05:46.787788",
    "rc": 22,
    "start": "2023-12-06 07:05:46.194017"
}

STDERR:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Required parameter secret does not have a value.

And this error when starting the cm1 container (the Ansible warning below appears several times in the log; docker timestamps stripped for readability):

[WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see https://docs.ansible.com/ansible-core/2.11/user_guide/become.html#risks-of-becoming-an-unprivileged-user

fatal: [localhost]: FAILED! => {
    "attempts": 5,
    "changed": false,
    "cmd": [
        "/opt/splunk/bin/splunk",
        "start",
        "--accept-license",
        "--answer-yes",
        "--no-prompt"
    ],
    "delta": "0:00:15.870844",
    "end": "2023-12-07 05:32:09.015177",
    "rc": 1,
    "start": "2023-12-07 05:31:53.144333"
}

STDOUT:

Splunk> Take the sh out of IT.

Checking prerequisites...
    Checking http port [8000]: open
    Checking mgmt port [8089]: open
    Checking appserver port [127.0.0.1:8065]: open
    Checking kvstore port [8191]: open
    Checking configuration... Done.
    Checking critical directories... Done
    Checking indexes...
        Validated: _audit _configtracker _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history main summary
    Done
    Checking filesystem compatibility... Done
    Checking conf files for problems...
    Done
    Checking default conf files for edits...
    Validating installed files against hashes from '/opt/splunk/splunk-9.1.2-b6b9c8185839-linux-2.6-x86_64-manifest'
    All installed files intact.
    Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
Done

Waiting for web server at http://127.0.0.1:8000 to be available............

WARNING: web interface does not seem to be available!

STDERR:

PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security

MSG:

non-zero return code

PLAY RECAP *********************************************************************
localhost : ok=60 changed=2 unreachable=0 failed=1 skipped=48 rescued=0 ignored=0

I am using this YAML file:

version: "3.6"

networks:
  splunknet:
    driver: bridge
    attachable: true

services:
  sh1:
    networks:
      splunknet:
        aliases:
          - sh1
    image: ${SPLUNK_IMAGE:-splunk/splunk:latest}
    hostname: sh1
    container_name: sh1
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_INDEXER_URL=idx1,idx2,idx3,idx4
      - SPLUNK_SEARCH_HEAD_URL=sh2,sh3
      - SPLUNK_SEARCH_HEAD_CAPTAIN_URL=sh1
      - SPLUNK_CLUSTER_MASTER_URL=cm1
      - SPLUNK_ROLE=splunk_search_head_captain
      - SPLUNK_DEPLOYER_URL=dep1
      - SPLUNK_PASSWORD=Abc@1234
      - SPLUNK_LICENSE_URI=/tmp/defaults/splunk_license_expire_on_January_02_2024.License
      - SPLUNK_APPS_URL
      - DEBUG=true
    ports:
      - 8000
      - 8089
    volumes:
      - ./defaults:/tmp/defaults

  sh2:
    networks:
      splunknet:
        aliases:
          - sh2
    image: ${SPLUNK_IMAGE:-splunk/splunk:latest}
    hostname: sh2
    container_name: sh2
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_INDEXER_URL=idx1,idx2,idx3,idx4
      - SPLUNK_SEARCH_HEAD_URL=sh2,sh3
      - SPLUNK_SEARCH_HEAD_CAPTAIN_URL=sh1
      - SPLUNK_CLUSTER_MASTER_URL=cm1
      - SPLUNK_ROLE=splunk_search_head
      - SPLUNK_DEPLOYER_URL=dep1
      - SPLUNK_PASSWORD=Abc@1234
      - SPLUNK_LICENSE_URI=/tmp/defaults/splunk_license_expire_on_January_02_2024.License
      - SPLUNK_APPS_URL
      - DEBUG=true
    ports:
      - 8000
      - 8089
    volumes:
      - ./defaults:/tmp/defaults

  sh3:
    networks:
      splunknet:
        aliases:
          - sh3
    image: ${SPLUNK_IMAGE:-splunk/splunk:latest}
    hostname: sh3
    container_name: sh3
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_INDEXER_URL=idx1,idx2,idx3,idx4
      - SPLUNK_SEARCH_HEAD_URL=sh2,sh3
      - SPLUNK_SEARCH_HEAD_CAPTAIN_URL=sh1
      - SPLUNK_CLUSTER_MASTER_URL=cm1
      - SPLUNK_ROLE=splunk_search_head
      - SPLUNK_DEPLOYER_URL=dep1
      - SPLUNK_PASSWORD=Abc@1234
      - SPLUNK_LICENSE_URI=/tmp/defaults/splunk_license_expire_on_January_02_2024.License
      - SPLUNK_APPS_URL
      - DEBUG=true
    ports:
      - 8000
      - 8089
    volumes:
      - ./defaults:/tmp/defaults

  dep1:
    networks:
      splunknet:
        aliases:
          - dep1
    image: ${SPLUNK_IMAGE:-splunk/splunk:latest}
    hostname: dep1
    container_name: dep1
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_INDEXER_URL=idx1,idx2,idx3,idx4
      - SPLUNK_SEARCH_HEAD_URL=sh2,sh3
      - SPLUNK_SEARCH_HEAD_CAPTAIN_URL=sh1
      - SPLUNK_CLUSTER_MASTER_URL=cm1
      - SPLUNK_ROLE=splunk_deployer
      - SPLUNK_DEPLOYER_URL=dep1
      - SPLUNK_PASSWORD=Abc@1234
      - SPLUNK_LICENSE_URI
      - SPLUNK_APPS_URL
      - DEBUG=true
    ports:
      - 8000
      - 8089
    volumes:
      - ./defaults:/tmp/defaults

  cm1:
    networks:
      splunknet:
        aliases:
          - cm1
    image: ${SPLUNK_IMAGE:-splunk/splunk:latest}
    hostname: cm1
    container_name: cm1
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_INDEXER_URL=idx1,idx2,idx3,idx4
      - SPLUNK_SEARCH_HEAD_URL=sh2,sh3
      - SPLUNK_SEARCH_HEAD_CAPTAIN_URL=sh1
      - SPLUNK_CLUSTER_MASTER_URL=cm1
      - SPLUNK_ROLE=splunk_cluster_master
      - SPLUNK_DEPLOYER_URL=dep1
      - SPLUNK_PASSWORD=Abc@1234
      - SPLUNK_LICENSE_URI
      - SPLUNK_APPS_URL
      - DEBUG=true
    ports:
      - 8000
      - 8089
    volumes:
      - ./defaults:/tmp/defaults

  idx1:
    networks:
      splunknet:
        aliases:
          - idx1
    image: ${SPLUNK_IMAGE:-splunk/splunk:latest}
    hostname: idx1
    container_name: idx1
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_INDEXER_URL=idx1,idx2,idx3,idx4
      - SPLUNK_SEARCH_HEAD_URL=sh2,sh3
      - SPLUNK_SEARCH_HEAD_CAPTAIN_URL=sh1
      - SPLUNK_CLUSTER_MASTER_URL=cm1
      - SPLUNK_ROLE=splunk_indexer
      - SPLUNK_DEPLOYER_URL=dep1
      - SPLUNK_PASSWORD=Abc@1234
      - SPLUNK_LICENSE_URI
      - SPLUNK_APPS_URL
      - DEBUG=true
    ports:
      - 8000
      - 8089
    volumes:
      - ./defaults:/tmp/defaults

  idx2:
    networks:
      splunknet:
        aliases:
          - idx2
    image: ${SPLUNK_IMAGE:-splunk/splunk:latest}
    hostname: idx2
    container_name: idx2
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_INDEXER_URL=idx1,idx2,idx3,idx4
      - SPLUNK_SEARCH_HEAD_URL=sh2,sh3
      - SPLUNK_SEARCH_HEAD_CAPTAIN_URL=sh1
      - SPLUNK_CLUSTER_MASTER_URL=cm1
      - SPLUNK_ROLE=splunk_indexer
      - SPLUNK_DEPLOYER_URL=dep1
      - SPLUNK_PASSWORD=Abc@1234
      - SPLUNK_LICENSE_URI
      - SPLUNK_APPS_URL
      - DEBUG=true
    ports:
      - 8000
      - 8089
    volumes:
      - ./defaults:/tmp/defaults

  idx3:
    networks:
      splunknet:
        aliases:
          - idx3
    image: ${SPLUNK_IMAGE:-splunk/splunk:latest}
    hostname: idx3
    container_name: idx3
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_INDEXER_URL=idx1,idx2,idx3,idx4
      - SPLUNK_SEARCH_HEAD_URL=sh2,sh3
      - SPLUNK_SEARCH_HEAD_CAPTAIN_URL=sh1
      - SPLUNK_CLUSTER_MASTER_URL=cm1
      - SPLUNK_ROLE=splunk_indexer
      - SPLUNK_DEPLOYER_URL=dep1
      - SPLUNK_PASSWORD=Abc@1234
      - SPLUNK_LICENSE_URI
      - SPLUNK_APPS_URL
      - DEBUG=true
    ports:
      - 8000
      - 8089
    volumes:
      - ./defaults:/tmp/defaults

  idx4:
    networks:
      splunknet:
        aliases:
          - idx4
    image: ${SPLUNK_IMAGE:-splunk/splunk:latest}
    hostname: idx4
    container_name: idx4
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_INDEXER_URL=idx1,idx2,idx3,idx4
      - SPLUNK_SEARCH_HEAD_URL=sh2,sh3
      - SPLUNK_SEARCH_HEAD_CAPTAIN_URL=sh1
      - SPLUNK_CLUSTER_MASTER_URL=cm1
      - SPLUNK_ROLE=splunk_indexer
      - SPLUNK_DEPLOYER_URL=dep1
      - SPLUNK_PASSWORD=Abc@1234
      - SPLUNK_LICENSE_URI
      - SPLUNK_APPS_URL
      - DEBUG=true
    ports:
      - 8000
      - 8089
    volumes:
      - ./defaults:/tmp/defaults

Can someone help me resolve this?
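For what it's worth, the sh1 failure ("Required parameter secret does not have a value") suggests the search head cluster secret is empty in the init command. In docker-splunk the cluster pass4SymmKey values are normally supplied through environment variables; a sketch of what the environment blocks might need (the variable names below are my understanding of the docker-splunk/splunk-ansible defaults — please verify against the repo's docs before relying on them):

```yaml
    environment:
      # Shared SHC secret: same value on sh1, sh2, sh3 and dep1 (assumed name)
      - SPLUNK_SHC_SECRET=some_shared_secret
      # Shared indexer-cluster secret: same value on cm1 and idx1-idx4 (assumed name)
      - SPLUNK_IDXC_SECRET=some_shared_secret
```

With the secret populated, the generated "splunk init shcluster-config ... -secret <value>" call should no longer fail with rc=22.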
Hi Team, I need to configure a Splunk alert to notify us when no logs have been updated on a given server (or on several servers) for more than an hour. The requirements are:
1. 40 servers in total require monitoring.
2. Each server has, on average, 3 log paths.
NOTE: I have seen an existing solution where the config is meant for a single server host; I need a workable solution that covers all 40 servers. Please let me know if you have any ideas.
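A sketch of one scalable approach: instead of one alert per server, check the latest event time per host and log path in a single tstats search (the index name is a placeholder; you could restrict the host list with an IN clause or a lookup of the 40 expected servers):

```
| tstats latest(_time) as last_seen where index=your_index by host, source
| eval minutes_since=round((now() - last_seen) / 60)
| where minutes_since > 60
```

Scheduled hourly with an alert condition of "number of results > 0", this flags every host/source pair that has gone quiet. One caveat: it only sees host/source pairs that indexed at least one event in the search window, so pairing it with a lookup of expected hosts (via append + stats) is the usual way to also catch servers that have gone completely silent.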
<label>Test</label>   <init>     <unset token="msg"></unset>     <unset token="form.msg"></unset>     <set token="showResult">true</set>   </init>   <fieldset submitButton="false"></fieldset>   <row>     <panel id="logo">       <html>         <p>           <img src="/static/app/CIS-CM)-CERT-Portal/cis-new.jpg" alt="Cis"/>         </p>       </html>     </panel>     <panel id="details">       <html>         <h>           <b> Check Report</b>         </h>         <div id="desc">           <p>             This report displays the  Check details.           </p>         </div>       </html>     </panel>   </row>   <row>     <panel id="hideshow">       <html depends="$hideglo$">         <a id="show">Show Filters</a>       </html>       <html rejects="$hideglo$">           <a id="hide">Hide Filters</a>       </html>     </panel>   </row>   <row>      <panel id="global" rejects="$hideglo$">       <input type="dropdown" id="orgselect" token="org" searchWhenChanged="false">         <label>Organization</label>         <showClearButton>false</showClearButton>         <search>           <query>|  `orgList`</query>           <earliest>0</earliest>           <latest>now</latest>         </search>         <fieldForLabel>cust_name</fieldForLabel>         <fieldForValue>cust_name</fieldForValue>         <prefix>em7_cust_name="</prefix>         <suffix>" em7_cust_name!=Cisco </suffix>       </input>       <input type="dropdown" id="region" token="region" searchWhenChanged="false">         <label>Region*</label>     <showClearButton>false</showClearButton>         <selectFirstChoice>true</selectFirstChoice>         <search>           <query>             |inputlookup cert_groups_lookup             | lookup cert_servers_lookup group_id OUTPUTNEW em7_org_id             | mvexpand em7_org_id             | dedup em7_org_id,group_id             | search em7_org_id="$cust_id$"             | sort 0 group_name           </query>           <earliest>0</earliest>           <latest>now</latest>   
      </search>         <fieldForLabel>group_name</fieldForLabel>         <fieldForValue>group_id</fieldForValue>         <prefix>group_id="</prefix>         <suffix>"</suffix>       </input>
Hi There!    I'm facing the error "Search is waiting for the input" <form stylesheet="dashboard.css,infobutton.css" script="multiselect_functions.js,infobutton.js" version="1.1" theme="dark"> <label>Agent Operational Dashboard</label> <description>v4.3</description> <init> <set token="agent_index">1T</set> <set token="console_stand_scope">OR `console_stand(*)`</set> <set token="form.cacp">*</set> <set token="form.sap">*</set> <set token="form.origin">*</set> </init> <search id="init"> <done> <condition match="isnull($scope$) OR $scope$ == &quot;agent_console_&quot;"> <set token="cmdb_scope">*</set> </condition> <condition match="$scope$ == &quot;agent_cmdb_&quot;"> <set token="cmdb_scope">IN</set> </condition> </done> <query> | makeresults </query> <earliest>$search_start$</earliest> <latest>$search_end$</latest> </search> <search> <query> | makeresults | eval LimitVersion_ens=`get_obsolete_version(Agent_Endpoint_Security)` | eval LimitVersion_agent=`get_obsolete_version(Agent_Agent)` </query> <done> <set token="ens_obsolete_version">$result.LimitVersion_ens$</set> <set token="agent_obsolete_version">$result.LimitVersion_agent$</set> </done> </search> <search id="compliance_agent"> <query> `compliance_agent_op("agent_index_source IN($agent_index$) $console_stand_scope$", now(), $timerange$, agent,$machine$, $scope$, $origin$, $country$, $cacp$, $sap$)` </query> <earliest>$search_start$</earliest> <latest>$search_end$</latest> </search> <search id="compliance_all_agent"> <query> `compliance_agent_op("`agent_scope_filter($cmdb_scope$)`", now(), $timerange$, agent,$machine$, $scope$, $origin$, $country$, $cacp$, $sap$)` </query> <earliest>$search_start$</earliest> <latest>$search_end$</latest> </search> <search> <done> <set token="search_start">$result.search_start$</set> <set token="search_end">$result.search_end$</set> </done> <query>| makeresults | fields - _time | eval now=now() | eval prev_day=if(strftime(now, "%a")="Mon" AND "$weekends$"="exclude", -3, -1) | 
eval search_start=relative_time(now, prev_day."d@d") | eval search_end=search_start + 86400</query> </search> <fieldset submitButton="false" autoRun="true"> <input type="multiselect" token="agent_index" searchWhenChanged="true"> <label>Choose Agent console</label> <choice value="1T,2A*,2S">All</choice> <choice value="1T">Agent Stand</choice> <choice value="2A*">Agent Scad</choice> <choice value="2S">Agent SCAPA</choice> <default>1T</default> <initialValue>1T</initialValue> <delimiter>, </delimiter> <change> <set token="agent_index_label">$label$</set> </change> <change> <condition match="like($agent_index$,&quot;%1T23%&quot;)"> <set token="console_stand_scope">OR `console_stand($cmdb_scope$)`</set> </condition> <condition match="!like($agent_index$,&quot;%1T23%&quot;)"> <set token="console_stand_scope"></set> </condition> </change> </input> <input type="dropdown" token="timerange" searchWhenChanged="true"> <label>Last Communication</label> <choice value="-1d@d">Previous day</choice> <choice value="-7d@d">Last 7 days</choice> <choice value="-15d@d">Last 15 days</choice> <choice value="-21d@d">Last 21 days</choice> <choice value="-30d@d">Last 30 days</choice> <choice value="-3mon">Last 3 months</choice> <choice value="-6mon">Last 6 months</choice> <choice value="-12mon">Last 1 year</choice> <change> <eval token="time_timechart">case($value$ == "-1d@d","1",$value$ == "-7d@d","2",$value$ == "-15d@d","3",$value$ == "-21d@d","4",$value$ == "-30d@d","5",$value$ == "-3mon","6",$value$ == "-6mon","7",$value$ == "-12mon","8")</eval> </change> <default>-15d@d</default> <initialValue>-15d@d</initialValue> </input> <input type="radio" token="origin" searchWhenChanged="true"> <label>Location</label> <choice value="*">All Locations</choice> <choice value="NAT">NAT</choice> <choice value="ROO">ROO</choice> <default>*</default> <initialValue>*</initialValue> <change> <unset token="form.country"></unset> </change> </input> <input type="multiselect" token="country" 
searchWhenChanged="true"> <label>Country</label> <search> <query>| inputlookup b1a_asset_country.csv where nat_roo="$origin$" | dedup country | fields country </query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <delimiter> </delimiter> <fieldForLabel>country</fieldForLabel> <fieldForValue>country</fieldForValue> <choice value="*">All</choice> <default>*</default> <initialValue>*</initialValue> </input> <input type="multiselect" token="machine" searchWhenChanged="true"> <label>Machine type</label> <choice value="*">All</choice> <choice value="VDI">VDI</choice> <choice value="Industrial">Industrial</choice> <choice value="Stand">Stand</choice> <choice value="MacOS">MacOS</choice> <default>*</default> <initialValue>*</initialValue> </input> <input type="radio" token="business_assets" searchWhenChanged="true"> <label>Business assets</label> <choice value="*">All assets</choice> <choice value="cacp">CACP</choice> <choice value="sap">SAP</choice> <default>*</default> <initialValue>*</initialValue> <change> <condition match="$business_assets$ == &quot;cacp&quot;"> <set token="cacp">true</set> <set token="sap">*</set> </condition> <condition match="$business_assets$ == &quot;sap&quot;"> <set token="sap">true</set> <set token="cacp">*</set> </condition> <condition match="$business_assets$ == &quot;*&quot;"> <set token="sap">*</set> <set token="cacp">*</set> </condition> </change> </input> <input type="dropdown" token="scope" searchWhenChanged="true"> <label>Scope</label> <choice value="agent_console_">Agent Console</choice> <choice value="agent_cmdb_">CMDB</choice> <default>agent_console_</default> <initialValue>agent_console_</initialValue> <change> <condition match="$scope$ == &quot;agent_console_&quot;"> <unset token="cmdb_scope"></unset> <set token="cmdb_scope">*</set> </condition> <condition match="$scope$ == &quot;agent_cmdb_&quot;"> <unset token="cmdb_scope"></unset> <set token="cmdb_scope">IN</set> </condition> </change> </input> <input 
type="multiselect" token="office_filter" searchWhenChanged="true"> <label>Front/Back office (only Stand Global compliance)</label> <choice value="Front Office">Front Office</choice> <choice value="Back Office">Back Office</choice> <initialValue>Front Office,Back Office</initialValue> <default>Front Office,Back Office</default> <valuePrefix>"</valuePrefix> <valueSuffix>"</valueSuffix> <delimiter>, </delimiter> <change> <eval token="office_filter_drilldown">replace($form.office_filter$ + "","([^,]+),?","&amp;form.office_filter=$1")</eval> </change> </input> <input type="radio" token="weekends" searchWhenChanged="true"> <label>Weekends</label> <choice value="exclude">Exclude Weekends</choice> <choice value="include">Include Weekends</choice> <default>exclude</default> <initialValue>exclude</initialValue> </input> </fieldset> <row> <panel> <title>Full Perimeter Compliance (all EPO)</title> <chart> <title>All Consoles</title> <search base="compliance_all_agent"> <query>| chart count by $scope$global_compliance | sort $scope$global_compliance</query> </search> <option name="charting.chart">pie</option> <option name="charting.drilldown">all</option> <option name="charting.fieldColors">{"Compliant":0x55AA55,"Non Compliant":0xCC0000","Not Applicable":"0xFFC300 "}</option> <option name="charting.seriesColors">[0x55AA55, 0xCC0000]</option> <option name="refresh.display">progressbar</option> <drilldown> <link target="_blank">/app/agent_operational_antivirus_details?form.compliance_filter=$click.value$&amp;form.agent_index=*&amp;form.timerange=$timerange$&amp;form.antivirus_filter=*&amp;form.machine=$machine$&amp;form.origin=$origin$&amp;form.country=$country$&amp;form.business_assets=$business_assets$&amp;form.scope=$scope$</link> </drilldown> </chart> </panel> </row> Thanks in Advance!!!!
I created a KV store collection on the search head cluster. When I clean up the environment and use the API or CLI to delete and recreate the KV store, I find that the KV store's data comes back on its own after a period of time. Why is this? BR
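For reference, a sketch of deleting a collection's data and definition through the REST API (the host, credentials, app, and collection names are placeholders). On a search head cluster I would expect the deletion to need to go through the captain so it replicates to all members; deleting on a single member can be undone by KV store replication from its peers, which may be exactly the "data comes back" behavior described:

```
# Delete all records in the collection (hypothetical names throughout)
curl -k -u admin:changeme -X DELETE \
    https://shc-captain:8089/servicesNS/nobody/myapp/storage/collections/data/mycollection

# Delete the collection definition itself
curl -k -u admin:changeme -X DELETE \
    https://shc-captain:8089/servicesNS/nobody/myapp/storage/collections/config/mycollection
```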
I have three events, like "received message class". If you see the picture, you will see three events for each customer, and each event has a customerordernumber. I want to check, for each and every customer, that I have all three event messages in the Splunk log. How do I write a Splunk query for that?
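A sketch of the completeness check, assuming each event carries the customerordernumber field and some field that distinguishes the three message types (message_class below is an assumed name; adjust to your extraction):

```
index=your_index "received message class"
| stats dc(message_class) as classes_seen values(message_class) as classes by customerordernumber
| eval complete=if(classes_seen >= 3, "yes", "no")
```

Appending | where complete="no" lists only the customers that are missing at least one of the three messages.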
Hello, we have PDF mail delivery scheduled every evening; however, sendemail may fail (a mail server error, for instance, with no error in Splunk's search.log). How can we re-run the PDF delivery for yesterday's data WITHOUT modifying the user's dashboard? Thanks.
Trying to get a script working:

import urllib.request, json

try:
    with urllib.request.urlopen("XXXXXXXXX.json") as url:
        data = json.loads(url.read().decode())
        print("data received")
        for item in data:
            print(item)
except urllib.error.URLError as e:
    print(f"Request failed with error: {e}")

This works fine and fetches the data, but I need it to pass through a proxy server, and when I try that it does not work. Any help is appreciated.
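A sketch of routing the same request through a proxy with urllib's ProxyHandler (the proxy host/port and the URL are placeholders to replace with real values; note that urllib also honors the HTTP_PROXY/HTTPS_PROXY environment variables in many environments):

```python
import json
import urllib.error
import urllib.request

# Placeholder proxy address -- replace with your real proxy host:port.
PROXIES = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

# Build an opener that sends requests through the proxy, then install it
# globally so plain urlopen() calls use it too.
proxy_handler = urllib.request.ProxyHandler(PROXIES)
opener = urllib.request.build_opener(proxy_handler)
urllib.request.install_opener(opener)

def fetch_json(url):
    """Fetch and decode a JSON document through the installed (proxied) opener."""
    try:
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode())
    except urllib.error.URLError as e:
        print(f"Request failed with error: {e}")
        return None
```

If the proxy requires authentication, the usual form is http://user:pass@proxy.example.com:8080 in the PROXIES values.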
While developing an add-on on a Splunk cluster, I ran into some problems:

1. I created an index named test on the Splunk indexer cluster, and linked the search head cluster nodes to the indexer cluster with the command "./splunk edit cluster-config -mode searchhead -master_uri <Indexer Cluster Master URI>". I want to write data to the test index through the Splunk API and read the written data back from other search head nodes, but I found that this does not work. Is it related to my having previously created the index on the search head node? If so, how can I remove the index from the search head cluster?

2. Is KV store data synchronized across the search head cluster? What should I do if I want to clean up the environment and delete a KV store in the search head cluster?

3. What is the data communication mechanism between search head cluster members? If I want my add-on to synchronize some data between multiple search heads, is there a good method?

BR!