All Posts


Hi @leobsksd, do you want to migrate only configurations or also data? For configurations, you have to copy all the apps you installed from the old instance to the new one, and possibly also the search and launcher apps, if you have any configuration in them. If you also want to migrate data, you have to: stop both instances, copy the $SPLUNK_HOME\var\lib\splunk folder (or a different one if you used a different $SPLUNK_DB) from the old to the new instance, copy all the indexes.conf files from the old to the new instance, and restart Splunk on the new instance. One last note: Windows is ok for quick tests or demos, but avoid it for production systems, use Linux! Ciao. Giuseppe
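A minimal sketch of the data-migration steps on Linux, assuming default paths ($SPLUNK_HOME=/opt/splunk), a single standalone instance, and placeholder hostnames and app names; adjust everything to your environment:

# stop Splunk on both the old and the new host first
/opt/splunk/bin/splunk stop
# copy the index data from the old host to the new one (newhost is a placeholder)
rsync -a /opt/splunk/var/lib/splunk/ newhost:/opt/splunk/var/lib/splunk/
# copy any custom indexes.conf files as well (app name is a placeholder)
rsync -a /opt/splunk/etc/apps/my_indexes_app/local/indexes.conf newhost:/opt/splunk/etc/apps/my_indexes_app/local/
# restart Splunk on the new instance
ssh newhost /opt/splunk/bin/splunk start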
Hi @hinako, where does the csv file come from? If it's in a server or pc folder, you can put it in a folder, read it, and override the lookup with a simple scheduled search: | inputcsv <your_csv_file> | outputlookup <your_lookup> optionally adding a check for the case where the csv file is empty or missing. Ciao. Giuseppe
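A minimal sketch of such a scheduled search, assuming the file has been copied into the directory that inputcsv reads from; the file and lookup names are placeholders. The override_if_empty=false option (available in recent Splunk versions) keeps the existing lookup untouched if the source file is empty or missing:

| inputcsv daily_export.csv
| outputlookup override_if_empty=false my_lookup.csv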
Hi @shakti, which version of Splunk and of the App are you using, and what are you migrating from? The latest app version (4.1.1) is compatible with all Splunk versions. Anyway, this app isn't supported. In this app there are many Python 3 scripts; maybe the version you have is outdated and must be upgraded. Ciao. Giuseppe
Hi Team, I have the below JSON string coming as an event in Splunk logs. After "data", the next field could be a, b, c, or d. I want to read the x and y fields. How do I write a single spath query like | spath input=inputJson path="data.{*}.x" ?

{data : {a : { x: { } y: { }}} }
{data : {b : { x: { } y: { }}} }
{data : {c : { x: { } y: { }}} }
{data : {d : { x: { } y: { }}} }
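One possible approach, sketched under the assumption that the raw JSON is already in a field called inputJson (as in the question): spath's {} wildcard applies to arrays, so a common workaround for a variable object key is to extract everything and then collapse the wildcard paths with foreach:

| spath input=inputJson
| foreach data.*.x [ eval x=coalesce(x, '<<FIELD>>') ]
| foreach data.*.y [ eval y=coalesce(y, '<<FIELD>>') ]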
I don't have access to the Splunk indexer and have access only to target servers. That's why I am asking for a way to build a dashboard from target server shell script output data.
Hello @yuanliu, what you wrote is similar to my situation. I solved this problem a different way (note that this wasn't the way to go). But your answer made me aware of factors to think about. Thank you for your help!
Hello Experts, I just want to get clarity on the points below. 1. Is the AppD DB agent capable of detecting ORA errors in an Oracle DB? 2. If yes, can we detect the ORA-00600 error via the AppD DB agent? Please let us know the process for the same.
Hello, I need to exclude and prevent the ingestion of data when these events occur. I'm using TA_Linux and this event comes from /var/log/audit/audit.log. Can you help me?

node=MXSPL1VMV803 type=SYSCALL msg=audit(1707180153.753:128962293): arch=c000003e syscall=87 success=yes exit=0 a0=7fb15c2fae20 a1=7fb0ea759e80 a2=7fb15c2fae20 a3=7fb1c0097b71 items=2 ppid=1 pid=1990 auid=3001 uid=3001 gid=3001 euid=3001 suid=3001 fsuid=3001 egid=3001 sgid=3001 fsgid=3001 tty=(none) ses=1 comm="elasticsearch[n" exe="/etc/elasticsearch/opendistroforelasticsearch/jdk/bin/java" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="delete-successful"

Regards
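A minimal sketch of the usual approach: route matching events to the nullQueue via props.conf/transforms.conf on the first full Splunk Enterprise instance that parses the data (indexer or heavy forwarder). The sourcetype stanza and the regex below are assumptions; adjust them to the sourcetype your audit events actually use and to the exact pattern you want to drop:

# props.conf
[linux:audit]
TRANSFORMS-drop_elastic_audit = drop_elastic_audit

# transforms.conf
[drop_elastic_audit]
# drop audit events generated by the elasticsearch process
REGEX = comm="elasticsearch
DEST_KEY = queue
FORMAT = nullQueue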
The Config page in my TA-dmarc app is not loading after migration...
Hi Ryan, I would like to know what specifically I have to do in order to exceed the page exclusion limit for Ajax on a SaaS controller. The reason I am asking is that I am receiving an error message on the controller that states "Failed to exclude the Request". I have excluded Ajax page requests before; however, I am not able to exclude any more on the same controller. Regards, Shashwat
Hi all, I need to clarify the correlation searches within SOAR. Is there any way to identify them?
Hi @richgalloway, when I manually execute the search, I noticed that by excluding the last line from the search query I am able to visualize the critical events successfully. Nevertheless, no alerts appear in the Incident Review dashboard.
Hi, I want to refresh a lookup file daily. How do I do this? The file is a csv on a file server. Thanks,
Use the addcoltotals command to sum the values and label the totals row as "ABC" in the location field. ... | addcoltotals labelfield=location label="ABC"
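A minimal sketch of how this could fit the table from the question, assuming the four locations are filtered first so the EE row is not included in the total; the field and value names come from the question:

| where location IN ("AA","BB","CC","DD")
| addcoltotals labelfield=location label="ABC"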
I recently upgraded my Splunk HF to Splunk Enterprise version 9.1. I also upgraded the Splunk TA add-on for New Relic. Previous TA version: 2.1.0. New TA version: 2.1.6. After the upgrade, the TA is not able to make API calls to New Relic and fails with the error "invalid api key". I confirmed the API key is correct and I am able to call it from another client.
I found this:  Migrate a Splunk Enterprise instance from one physical machine to another - Splunk Documentation   I will give it a try. Leo
Error rate and Target - need to display the Target number for the latest week only. Hi, I have results for Error rate and Target for the last 12 weeks, and in the visualization the Target numbers are interfering with the error rate in the graph above. Is there any way to project Target for only the latest week of the 12 weeks of data and still project the green line for 12 weeks, so it won't interfere with the error rate numbers? Splunk query below.

index=equipment_error reporttype=p_scada description="No case found with the expected dimensions" OR description="Flight Path Occupied" OR description="Place Position Occupied" OR description="Tray pattern does not comply" AND mark_code=TPO earliest=-12w@w1 latest=-0@w1
| eval APAL=substr(isc_id,2,2)
| append [| search index=internal_statistics_1h earliest=-12w@w1 latest=-0w@w1 [| inputlookup internal_statistics | where report="Throughput" AND level="step" AND step="Pallet building" AND measurement IN("Case") | fields id | rename id AS statistic_id]
| eval value=coalesce(value,sum_value)
| fields statistic_id value group_name location
| eval _virtual_=if(isnull(virtual),"N","Y"),_cd_=replace(_cd, ".*:", "")
| sort 0 -_time _virtual_ -"_indextime" -_cd_
| dedup statistic_id _time group_name
| fields - _virtual_ _cd_
| lookup internal_statistics id AS statistic_id OUTPUTNEW report level step measurement
| eval location=substr(location,12) , location="CaseQty".location
| timechart span=1w@1 sum(value) BY location limit=0
| addtotals]
| timechart span=1w@1 count(isc_id) as ErrorQty sum(Total) as CaseQty values(mark_code) as mark_code
| eval ErrorRate=round((ErrorQty/CaseQty)*10000,1)
| fillnull value=0
| eval Target="5"
| table _time ErrorRate Target
| where ErrorRate>0.001

Appreciate the help, thanks in advance.
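One possible approach, sketched as an assumption about the intent: keep the Target value only on the latest week's data point so the chart shows a single Target marker instead of a full line. Appended after the final where clause of the query above:

| eventstats max(_time) AS latest_week
| eval Target=if(_time=latest_week, Target, null())
| fields - latest_week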
I want to query the user dataset using the from datamodel command. I know how to use nodename in the tstats command. When I run the SPL shown below, an error appears. | from datamodel: test_01.evtid.user If you know how, please reply.
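A couple of hedged variants that might be worth trying, under the assumption that the dataset is named user within the test_01 data model and that dataset names are referenced without the full parent path (test_01, evtid, and user all come from the question):

| from datamodel:"test_01.user"

| datamodel test_01 user search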
Hi Splunk experts, I'm a Splunk beginner. I need help with a requirement. I have fields named 'location', 'login', and 'desk' with the following values:

location  login  desk
AA        1      0
BB        1      0
CC        0      10
DD        1      1
EE        0      1

My goal is to create a new location called 'ABC', which should be the sum of all four locations (AA, BB, CC, DD). I've tried the following search, but it's not summing up all four locations:

| appendpipe [search AA BB CC DD | eval location="ABC"]
| stats sum(login) as login by desk

Please guide me on how to achieve this. Thank you.
You'd have to restart the forwarder service after logrotate (because I assume that's what you're using), just like you'd normally kill -HUP your syslog daemon.
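A minimal sketch of what that could look like in a logrotate config; the log path and forwarder install path are assumptions, not something from the thread:

/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    postrotate
        # restart the universal forwarder so it reopens the rotated files
        /opt/splunkforwarder/bin/splunk restart >/dev/null 2>&1 || true
    endscript
}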