All Topics

All, I am looking for a solution to identify the hosts that have stopped reporting to Splunk using a lookup table. However, the catch is that there are Primary and Secondary hosts for some data types, and I do not want to get alerted if either of the hosts (Primary or Secondary) is still reporting. At the same time, I would like to map these hosts to their respective index, so that if a host (or, in some cases, both the primary and secondary hosts) for a particular index stops reporting, an alert should trigger (the lookup will probably have another column mapping the hosts to their index). Any solution would be highly appreciated!!
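
One possible shape for this, sketched against a hypothetical lookup expected_hosts.csv with columns host, pair_id, and index (all assumed names): left-join the expected hosts against what actually reported, then alert only on pairs where no member reported.

| inputlookup expected_hosts.csv
| join type=left host
    [| tstats latest(_time) as last_seen where index=* earliest=-24h by host]
| eval reporting=if(isnull(last_seen), 0, 1)
| stats sum(reporting) as reporting_hosts, values(host) as hosts by pair_id, index
| where reporting_hosts=0

Standalone hosts are simply pairs with a single member, so the same where clause covers them.
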
I'm trying to calculate the data throughput for a cloud computing solution that will be charging based on outgoing data throughput. We're collecting on the link using Security Onion and forwarding those Zeek logs to our Splunk instance.

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| fields orig_bytes, resp_bytes
| eval terabytes=(((((resp_bytes+orig_bytes)/1024)/1024)/1024)/1024)
| stats sum(terabytes)

This gives me traffic throughput in and out of the network for external connections. However, what I need is to count orig_bytes only when id.orig_h is in my `frontend`, and resp_bytes only when id.resp_h is in `frontend`. I can get them separately by running two different searches and adding the results up by hand, but I'm sure there's a way to do what I want in one search using some sort of conditional. I've tried using where and eval if, but I'm just not skilled enough, it seems.
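
Since the `front end` macro can't be evaluated inside eval, one hedged way to do this in a single search is to express the front-end range directly in the eval, here as a hypothetical CIDR (10.1.0.0/16 is a placeholder), and zero out whichever side doesn't apply:

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| eval egress_bytes = if(cidrmatch("10.1.0.0/16", 'id.orig_h'), orig_bytes, 0) + if(cidrmatch("10.1.0.0/16", 'id.resp_h'), resp_bytes, 0)
| stats sum(eval(egress_bytes/1024/1024/1024/1024)) as terabytes

If the front end is a host list rather than a subnet, an eval IN works the same way, e.g. if(in('id.orig_h', "10.1.0.5", "10.1.0.6"), orig_bytes, 0).
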
I'm attempting to compute the total number of API calls from our backend engine. Initially, I process API identification text logs as events in the engine's index, enabling me to filter respective request IDs. Simultaneously, I process the target_num count within the same index/source. By merging these two logs through a join operation, I filter out all relevant requests to compute the total API calls accurately, achieving the desired outcome. Subsequently, I aim to enhance this by joining the filtered request IDs with another platform's index/source. Here, I intend to determine the success or failure status of each request at the platform level and then multiply it by the original value of target_num. However, upon combining these queries, I'm experiencing discrepancies in the execution results. I'm uncertain about the missing piece causing this issue.

My final query (x-request-id is an existing field on the platform index; I am not using any rex for it):

index=default-va6* sourcetype="myengine-stage" "API call is True for MyEngine"
| rex field=_raw "request_id=(?<reqID>.+?) - "
| dedup reqID
| join reqID
    [ search index=default-va6* sourcetype="myengine-stage" "Target language count"
      | rex field=_raw "request_id=(?<reqID>.+?) - "
      | rex field=_raw "Target language count (?<num_target>\d+)"
      | dedup reqID
      | fields reqID, num_target ]
| fields reqID, num_target
| stats count("reqID") as total_calls by num_target
| eval total_api_calls = total_calls * num_target
| stats sum(total_api_calls) as Total_Requests_Received
| rename reqID AS "x-request-id"
| join "x-request-id"
    [ search index=platform-va6 sourcetype="platform-ue*" "Marked request as"
      | eval num_succeed = if(like(message, "Marked request as succeed%"), 1, 0)
      | eval num_failed = if(like(message, "Marked request as failed%"), 1, 0)
      | fields num_succeed, num_failed ]
| fields num_succeed, num_failed
| stats sum(num_succeed) as num_succeed, sum(num_failed) as num_failed
| eval total_succeed_calls = num_succeed * num_target, total_failed_calls = num_failed * num_target
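
One thing worth checking, as a hedged sketch rather than a definitive fix: the first stats throws away reqID, so the later rename and the second join have nothing left to match on (and the platform subsearch never keeps x-request-id either). Restructuring so both joins happen before any aggregation keeps the request ID alive:

index=default-va6* sourcetype="myengine-stage" "API call is True for MyEngine"
| rex field=_raw "request_id=(?<reqID>.+?) - "
| dedup reqID
| join reqID
    [ search index=default-va6* sourcetype="myengine-stage" "Target language count"
      | rex field=_raw "request_id=(?<reqID>.+?) - "
      | rex field=_raw "Target language count (?<num_target>\d+)"
      | dedup reqID
      | fields reqID, num_target ]
| rename reqID AS "x-request-id"
| join "x-request-id"
    [ search index=platform-va6 sourcetype="platform-ue*" "Marked request as"
      | eval num_succeed = if(like(message, "Marked request as succeed%"), 1, 0)
      | eval num_failed = if(like(message, "Marked request as failed%"), 1, 0)
      | fields "x-request-id", num_succeed, num_failed ]
| stats sum(num_target) as Total_Requests_Received, sum(eval(num_succeed * num_target)) as total_succeed_calls, sum(eval(num_failed * num_target)) as total_failed_calls

Note that Total_Requests_Received here sums num_target per request, which assumes that is what "total API calls" means; adjust if the original count-by semantics were intentional.
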
<search>
  <query>index="ourIndex" sourcetype=$stype$ABC AND Is_Service_Account="True" OR Is_Service_Account="False" earliest=-48h
| eval DC=upper(DC)
| eval env1=case(DC like "%Q%","QA", DC like "%DEV%","DEV", true(), "PROD")
| search env1=$envPure$ AND $domainPure$
| rename DC AS domainPure
| stats count</query>
  <earliest>0</earliest>
  <latest></latest>
</search>

If the query sets earliest=-48h but the source code contains <earliest>0</earliest>, what would happen if we enable an admission rule that disables All Time searches?
Hi, where are the checkpoint values for enabled DB Connect inputs stored? I checked the folder /opt/splunk/var/lib/splunk/modinputs/server/splunk_app_db_connect, but there are only files named after our disabled DB inputs, not our enabled ones.
Splunk Enterprise version: 9.0.4.1
Splunk DB Connect version: 3.6.0
P.S. Our three enabled DB inputs do work correctly, and I can see the checkpoint values from the web UI; I just cannot find where they are stored on the OS.
Best regards, Altin
Hello, I have a panel with a search query, e.g.

<row><panel><table>
  <search>
    <query>_some_query_ | table A B C D</query>
  </search>
</table></panel></row>

and it displays multiple rows on a dashboard. I am trying to create a button that will send all of the column C data to a different site, so I want to store column C's data as a token. Is there a way to do that?
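
A token can only hold a string, but a secondary search can collapse column C into one delimited value and store it with a <done> handler. A minimal sketch, where colC and the comma delimiter are assumptions:

<search>
  <query>_some_query_ | stats values(C) as Cvals | eval Cvals=mvjoin(Cvals, ",")</query>
  <done>
    <set token="colC">$result.Cvals$</set>
  </done>
</search>

If duplicates and row order matter, stats list(C) (capped at 100 values) may fit better than values(C), which dedupes and sorts.
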
Hello! As a newcomer to the world of IT and cyber security, I am having some trouble. I am trying to set up a Splunk homelab environment to get some hands-on experience with the application. My goal is to import or stream some data to a Splunk dashboard so I can mess around and learn for starters, but eventually set up my own home network monitoring system. I've been able to statically import some local logs and read them over, which is fine. I'd like to set up a better environment for detecting intrusions and analyzing for IOCs. If anyone has some helpful links or advice, I would very much appreciate it!
I need help with a Splunk query to return events where every object in an array contains a certain value for a key.

Event 1: { list: [ {"name": "Hello", "type": "code"}, {"name": "Hello", "type": "document"} ] }
Event 2: { list: [ {"name": "Hello", "type": "code"}, {"name": "World", "type": "document"} ] }
Event 3: { list: [ {"name": "Hello", "type": "document"}, {"name": "Hello", "type": "document"} ] }

Filters: the first object in the list array should have "type": "code", and all the items in the list array should have "name": "Hello".

Expected output: from the events above, the query should return Event 1, where list[0].type = "code" and every item in list has "name": "Hello".

I tried multiple approaches, such as search list{}.name="Hello", but that returns events with at least one element having name: Hello. I was, however, able to achieve the first filter as below:

| eval conflict = mvindex(list, 0)
| spath input=conflict
| search type=code

If someone can help in achieving both filters in one query, that would be helpful. Thanks in advance!
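
A hedged sketch that applies both filters in one pass, assuming the JSON sits in _raw so spath can pull the arrays out as multivalue fields:

<your base search>
| spath path=list{}.name output=names
| spath path=list{}.type output=types
| where mvindex(types, 0)="code" AND mvcount(mvfilter(match(names, "^Hello$")))=mvcount(names)

mvindex(types, 0) checks the first object's type, and the mvcount comparison only passes when every element of names matched "Hello".
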
Hello Splunkers, I'm encountering an issue with data model acceleration in my ES instance. A few weeks ago, I enabled several data models to support correlation searches. However, I recently noticed that there hasn't been any increase in SVC usage, and upon checking today, I found that the acceleration status for these models was disabled. I'm puzzled by this and would appreciate any insights into why this occurred and how to identify the root cause. Thank you.
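
Two hedged starting points for root-causing this — the REST endpoint is standard, though the exact field names are from memory, and the _audit search is only a rough guess at a keyword that would catch the change:

| rest /servicesNS/-/-/data/models
| table title, eai:acl.app, acceleration

index=_audit "datamodels" action=*
| table _time, user, action, info
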
Hi, I wanted to create a table as below. I am extracting Status and Reason using rex. How can I create this? The Count column should count the events; I used stats count by ...
Hello, so I have the following issue... Let's say I have a Splunk table with a rename at the end. The tokens can have different values, so the final column header is dynamic, as it depends on the token.

| table 1_aaa, 1_bbb, 1_ccc, 2_aaa, 2_bbb, 2_ccc, 3_aaa, 3_bbb, 3_ccc
| rename 1_aaa as "1. $aaa$", 1_bbb as "1. $bbb$", 1_ccc as "1. $ccc$", 2_aaa as "2. $aaa$", 2_bbb as "2. $bbb$", 2_ccc as "2. $ccc$", 3_aaa as "3. $aaa$", 3_bbb as "3. $bbb$", 3_ccc as "3. $ccc$"

The formatting works properly:

<format type="color" field="1. $aaa$">
  <colorPalette type="list">[#5b708f]</colorPalette>
</format>

But the drilldown does not. I tried the conditions below, without success:

<drilldown>
<condition match="$click.name2$ = 1. $aaa$">
<condition match="$click.name2$ = &quot;1. $aaa$&quot;">
<condition match="$click.name2$ = &quot;1. &quot;$aaa$">
<condition match="match('click.name2', 1. $aaa$)">
<condition match="match('click.name2', &quot;1. $aaa$&quot;)">
<condition match="match('click.name2', &quot;1. &quot;$aaa$)">
<condition match="match('click.name2', '1. $aaa$')">

Is there a way to do it with such a string-and-token combination?

P.S.: As a possible workaround, it works properly without combining string and token, but I would rather avoid that, as I would unnecessarily need to create a separate token for each column:

<set token="1_aaa">1. $result.aaa$</set>
<set token="1_bbb">1. $result.bbb$</set>
<set token="1_ccc">1. $result.ccc$</set>
...
| table 1_aaa, 1_bbb, 1_ccc, 2_aaa, 2_bbb, 2_ccc
| rename 1_aaa as "$1_aaa$", 1_bbb as "$1_bbb$", 1_ccc as "$1_ccc$", 2_aaa as "$2_aaa$", 2_bbb as "$2_bbb$", 2_ccc as "$2_ccc$", 3_aaa as "$3_aaa$", 3_bbb as "$3_bbb$", 3_ccc as "$3_ccc$"
...
<format type="color" field="$1_aaa$">
  <colorPalette type="list">[#5b708f]</colorPalette>
</format>
...
<drilldown>
<condition match="$click.name2$ = $1_aaa$">
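
An untested sketch for the string-plus-token case: the match attribute takes an eval expression, click.name2 can be referenced in single quotes, and the |s token filter wraps the token's value in quotes so it can be concatenated with the eval . operator:

<drilldown>
  <condition match="'click.name2' == &quot;1. &quot; . $aaa|s$">
    <!-- actions for the "1. $aaa$" column -->
  </condition>
</drilldown>
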
I have OS log data coming from Windows/Linux into Splunk, with a particular field whose values are unseparated.

Sample log data representation:

_time   parameter   value
x       a c b       x1
x       a c b       x2
x       a c b       x3
y       d e         y1
y       d e         y2

I would want to split the parameter field's values in such a way that each row gets one of the group's values, in the same order.

Sample output:

_time   parameter   value
x       a           x1
x       c           x2
x       b           x3
y       d           y1
y       e           y2

Can someone please help?
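
Assuming the rows inside each _time group arrive in the same order as the values inside parameter, and that parameter is a single space-delimited string (both assumptions), streamstats can index into it:

<your base search>
| streamstats count as pos by _time
| eval parameter=mvindex(split(parameter, " "), pos-1)
| table _time, parameter, value

If parameter is already a multivalue field, drop the split() and use mvindex(parameter, pos-1) directly.
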
Is there currently a capability in Splunk that will allow us to search and compare the previous version of an input lookup with the current version to identify what has changed between the two? In search, is there a parameter we can pass to the inputlookup command to specify the version we want to evaluate?
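
As far as I know there is no version parameter on inputlookup, but a scheduled snapshot makes the comparison possible. A sketch with my_lookup.csv, my_lookup_prev.csv, and the key field host all as assumed names — first a scheduled search that runs before each update:

| inputlookup my_lookup.csv | outputlookup my_lookup_prev.csv

then the diff:

| inputlookup my_lookup.csv | eval version="current"
| append [| inputlookup my_lookup_prev.csv | eval version="previous"]
| stats values(version) as seen_in by host
| where mvcount(seen_in)=1

This flags added and removed keys; to catch changed attribute values, extend the by clause to cover every column.
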
I have an odd task I'm trying to fulfill and I'm not entirely sure how to go about it. We have a print server that forwards logs to Splunk. We also have multiple printers on a separate VLAN that only the print server can see. The objective is to see if we can pull the logs directly from the printers and forward them to Splunk. From what I've been reading, this should be possible by setting up the print server as a sort of intermediate forwarder. I believe the process is to have the printers redirect their logs to a specific folder on the print server, then add that folder to the list of inputs monitored by the Splunk forwarder. Does that sound correct? Has anyone done this before? Any instructions that could make this easier? I'm fairly new to Splunk and still learning how to set things up, so as many details as possible would be helpful. Thanks.
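
That is the usual intermediate-forwarder pattern, yes. Once the printers write (e.g. via syslog) into a folder on the print server, a monitor stanza in the forwarder's inputs.conf picks it up — the path, index, and sourcetype below are placeholders, not known values:

[monitor://C:\PrinterLogs]
disabled = 0
index = printers
sourcetype = printer:syslog
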
Hi Team, we are facing an issue in our environment: Episodes are not being generated after upgrading to ITSI 4.17.1. Everything is enabled (services, correlation searches, NEAP policies), and even so, episodes are not being created. ITSI_event_grouping is enabled and the Rules Engine is working fine. Please provide a solution for this.
When I do this search:

index="mydata"
| eval mymean=avg(floatnumbers)
| table floatnumbers, mymean

mymean just mimics whatever is in floatnumbers. How do I calculate the mean? I have tried the fieldsummary command, but when I did that, it would not port to a chart correctly.
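
eval functions operate inside a single event, which is why avg() just echoes the field; averaging across events is a stats-family job. Two variants, depending on whether the raw values need to stay visible:

index="mydata" | stats avg(floatnumbers) as mymean

index="mydata" | eventstats avg(floatnumbers) as mymean | table floatnumbers, mymean

eventstats attaches the overall mean to every event, so it charts alongside the raw values.
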
Hello team, I am facing an issue with multiple events getting merged into a single event in tier 3. I do not have this issue in tier 1 or when I manually run the saved search; however, when the saved search runs at its scheduled time, these multiple events get merged into one single event. I even tried adding the values below in props.conf of the Data App, but it did not help:

[sourcetype::_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{2}\/\d{2}\/\d{4}\s\d{2}:\d{2}:\d{2}\s\+\d{4}

Below is what the merged event in tier 3 looks like:

03/28/2024 10:35:00 +0000,search_now=1711622100.000000000,source_host="1.1.1.1 : ip-sample_ip.ec2.internal",metric_label="Port_Connectivity : Reporting no data",instance="Port : 45",metric_value="0",metric_unit="latest",alert_value="100",tower="Port reporting no data",threshold1="-2",threshold2="-1",threshold3="0.5",threshold4="0.5",blacklist_alerts="1",add_info="Time=1711622100.000000;!@#;state=offline;!@#;message=NA;!@#;protocol=NA;!@#;responsetext=NA;!@#;responsetime=1711622100.000000;!@#;returncode=NA;!@#;roundtriptime=NULL;!@#;service_name=NA;!@#;app_context=port_data"03/28/2024 10:35:00 +0000,search_now=1711622100.000000000,source_host="1.1.1.1 : ip-sample_ip.ec2.internal",metric_label="Port_Monitoring : Port_Status",instance="Port : 45",metric_value="201",metric_unit="Status",alert_value="100",tower="Infra",threshold1="0",threshold2="0",threshold3="300",threshold4="500",blacklist_alerts="1",add_info="Time=2024-03-28T10:33:48Z;!@#;state=reachable;!@#;message=reachable;!@#;protocol=UDP;!@#;responsetext=/bin/sh: line 1: nc: command not found;!@#;responsetime=na;!@#;returncode=0;!@#;roundtriptime=NULL;!@#;service_name=IMP;!@#;app_context=port_data"03/28/2024 10:35:00 +0000,search_now=1711622100.000000000,source_host="127.0.0.1 : ip-sample_ip.ec2.internal",metric_label="Port_Connectivity : Reporting no data",instance="Port : 3389",metric_value="0",metric_unit="latest",alert_value="100",tower="Port reporting no data",threshold1="-2",threshold2="-1",threshold3="0.5",threshold4="0.5",blacklist_alerts="1",add_info="Time=1711622100.000000;!@#;state=offline;!@#;message=NA;!@#;protocol=NA;!@#;responsetext=NA;!@#;responsetime=1711622100.000000;!@#;returncode=NA;!@#;roundtriptime=NULL;!@#;service_name=NA;!@#;app_context=port_data"

Every event should end at app_context=port_data", to be exact. Please let me know how to resolve this.
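
Two things worth checking, offered as an untested sketch: props.conf sourcetype stanzas take the bare sourcetype name (the sourcetype:: prefix is not valid there), and since the merged events run together with no newline between them, the breaker needs a lookahead rather than [\r\n]+. The settings also have to live on the first Splunk instance that parses the data (indexer or heavy forwarder), not only at search time:

[_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ()(?=\d{2}\/\d{2}\/\d{4}\s\d{2}:\d{2}:\d{2}\s\+\d{4},search_now=)

If the zero-width capture group misbehaves, capturing the closing quote instead — (\")(?=\d{2}\/\d{2}\/\d{4}) — is a fallback, at the cost of consuming that character.
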
Hi Team, The below is the event which we have received into the splunk, Dataframe row : {"_c0":{"0":"{","1":" \"0\": {","2":" \"jobname\": \"A001_GVE_ADHOC_AUDIT\"","3":" \"status\": \"ENDED NOTOK\"","4":" \"Timestamp\": \"20240317 13:25:23\"","5":" }","6":" \"1\": {","7":" \"jobname\": \"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS\"","8":" \"status\": \"ENDED NOTOK\"","9":" \"Timestamp\": \"20240317 13:25:23\"","10":" }","11":" \"2\": {","12":" \"jobname\": \"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS_WEEKLY\"","13":" \"status\": \"ENDED NOTOK\"","14":" \"Timestamp\": \"20240317 13:25:23\"","15":" }","16":" \"3\": {","17":" \"jobname\": \"D001_GVE_SOFT_MATCHING_GDH_CA\"","18":" \"status\": \"ENDED NOTOK\"","19":" \"Timestamp\": \"20240317 13:25:23\"","20":" }","21":" \"4\": {","22":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TRX_ORG\"","23":" \"status\": \"ENDED NOTOK\"","24":" \"Timestamp\": \"20240317 13:25:23\"","25":" }","26":" \"5\": {","27":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_123\"","28":" \"status\": \"ENDED NOTOK\"","29":" \"Timestamp\": \"20240317 13:25:23\"","30":" }","31":" \"6\": {","32":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_45\"","33":" \"status\": \"ENDED OK\"","34":" \"Timestamp\": \"20240317 13:25:23\"","35":" }","36":" \"7\": {","37":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_ENPW\"","38":" \"status\": \"ENDED NOTOK\"","39":" \"Timestamp\": \"20240317 13:25:23\"","40":" }","41":" \"8\": {","42":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_T\"","43":" \"status\": \"ENDED NOTOK\"","44":" \"Timestamp\": \"20240317 13:25:23\"","45":" }","46":" \"9\": {","47":" \"jobname\": \"DREAMPC_CALC_ML_NAMESAPCE\"","48":" \"status\": \"ENDED NOTOK\"","49":" \"Timestamp\": \"20240317 13:25:23\"","50":" }","51":" \"10\": {","52":" \"jobname\": \"DREAMPC_MEMORY_AlERT_SIT\"","53":" \"status\": \"ENDED NOTOK\"","54":" \"Timestamp\": \"20240317 13:25:23\"","55":" }","56":" \"11\": {","57":" \"jobname\": \"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS\"","58":" \"status\": \"ENDED NOTOK\"","59":" \"Timestamp\": \"20240317 13:25:23\"","60":" }","61":" \"12\": {","62":" \"jobname\": \"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","63":" \"status\": \"ENDED NOTOK\"","64":" \"Timestamp\": \"20240317 13:25:23\"","65":" }","66":" \"13\": {","67":" \"jobname\": \"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS\"","68":" \"status\": \"ENDED OK\"","69":" \"Timestamp\": \"20240317 13:25:23\"","70":" }","71":" \"14\": {","72":" \"jobname\": \"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","73":" \"status\": \"ENDED OK\"","74":" \"Timestamp\": \"20240317 13:25:23\"","75":" }","76":" \"15\": {","77":" \"jobname\": \"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS\"","78":" \"status\": \"ENDED OK\"","79":" \"Timestamp\": \"20240317 13:25:23\"","80":" }","81":" \"16\": {","82":" \"jobname\": \"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","83":" \"status\": \"ENDED OK\"","84":" \"Timestamp\": \"20240317 13:25:23\"","85":" }","86":" \"17\": {","87":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH\"","88":" \"status\": \"ENDED OK\"","89":" \"Timestamp\": \"20240317 13:25:23\"","90":" }","91":" \"18\": {","92":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH_WEEKLY\"","93":" \"status\": \"ENDED OK\"","94":" \"Timestamp\": \"20240317 13:25:23\"","95":" }","96":" \"19\": {","97":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_SAMCONTDEPOT\"","98":" \"status\": \"ENDED NOTOK\"","99":" \"Timestamp\": \"20240317 13:25:23\"","100":" }","101":" \"20\": 
{","102":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TLXLSP_TRXN\"","103":" \"status\": \"ENDED NOTOK\"","104":" \"Timestamp\": \"20240317 13:25:23\"","105":" }","106":" \"21\": {","107":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR\"","108":" \"status\": \"ENDED OK\"","109":" \"Timestamp\": \"20240317 13:25:23\"","110":" }","111":" \"22\": {","112":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR_WEEKLY\"","113":" \"status\": \"ENDED OK\"","114":" \"Timestamp\": \"20240317 13:25:23\"","115":" }","116":" \"23\": {","117":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON\"","118":" \"status\": \"ENDED NOTOK\"","119":" \"Timestamp\": \"20240317 13:25:23\"","120":" }","121":" \"24\": {","122":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON_WEEKLY\"","123":" \"status\": \"ENDED OK\"","124":" \"Timestamp\": \"20240317 13:25:23\"","125":" }","126":" \"25\": {","127":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI\"","128":" \"status\": \"ENDED NOTOK\"","129":" \"Timestamp\": \"20240317 13:25:23\"","130":" }","131":" \"26\": {","132":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI_WEEKLY\"","133":" \"status\": \"ENDED NOTOK\"","134":" \"Timestamp\": \"20240317 13:25:23\"","135":" }" we have tried to extract the required fields such as Timestamp, Jobname, Status from the above events using the below splunk query index=app_events_dwh2_de_int _raw=*jobname* | rex max_match=0 "\\\\\\\\\\\\\"jobname\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<Name>[^\\\]+)" | rex max_match=0 "\\\\\\\\\\\\\"status\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<State>[^\\\]+)" | rex max_match=0 "Timestamp\\\\\\\\\\\\\": \\\\\\\\\\\\\"(?<TIME>\d+\s*\d+\:\d+\:\d+)" | rex max_match=0 "execution_time_in_seconds\\\\\\\\\\\\\": \\\\\\\\\\\\\"(?<EXECUTION_TIME>[\d\.\-]+)" | table "TIME", "Name", "State", "EXECUTION_TIME" | mvexpand TIME But the issue we want to extract only those status jobs with status as " ENDED NOTOK". But we are unable to extract them. Also when we use mvexpand command for the table, it is showing multiple duplicate values.   We request you to kindly look into this and help us on this issue.
I have been using this script to update many of our lookups/datasets, but it's no longer working, giving the following error when downloading the file: "status 403, reason: Forbidden". It was working last week but suddenly stopped. The user I'm using is the owner of these lookups/datasets, and no permissions were changed. The Splunk instance currently has an issue with an expired certificate; could it be because of that, or something else?
Hello, I have this really weird problem I've been trying to figure out for the past two days without success. Basically, I have a Splunk architecture where I want to put the deployment server (DS) on the heavy forwarder, since I don't have a lot of clients and it's just a lab. The problem is as follows: with a fresh Splunk Enterprise instance that is going to be the heavy forwarder, when I set up the client by putting the heavy forwarder's IP address and port in deploymentclient.conf, it first works as intended and I can see the client in Forwarder Management. As soon as I enable forwarding on the heavy forwarder and put in the IP addresses of the indexers, the client no longer shows up in the heavy forwarder's Forwarder Management panel, but shows up in every other instance's Forwarder Management panel (manager node, indexers, etc.)?! It's as if the heavy forwarder is forwarding the deployment client information to every instance apart from itself. Thanks in advance.