All Posts

Dears, I'm trying to install Events Service 23.7 with Elasticsearch 8. The scenario:

1- Adding hosts by the user root: this is only applicable by the root user, as I tried to do it with another user over passwordless SSH but it didn't work. (If you advise adding hosts with a different user, okay, I will then share the error I get.)

2- After adding the hosts as root, I go to the Events Service:

   a- The default installation directory is /opt/appdynamics/platform/product/events-service/data and I change it to /home/appd-user/appdynamics-event-service-path/. This way I get the error "Unable to check the health of Events Service hosts [events-03, events-02, events-01] through port 9080", or it runs and then stops within seconds.

   b- In the logs, Elasticsearch stopped because it was running as the root user.

   c- My question here is also: why does the installation of the Events Service always go under /opt/appdynamics?

To solve this I added the hosts with the appd user and ran on each Events Service server:

sudo chown -R appd:appd /opt/
sudo chmod -R u+w /opt/

Adding the hosts went well and the installation of the Events Service went well, but some of the Events Service nodes stopped with an "Elasticsearch running as the root user" error, even though it runs as the appd user. To solve this I ran the following manually on each server:

bin/events-service.sh stop -f && rm -r events-service-api-store.id && rm -r elasticsearch.id
nohup bin/events-service.sh start -p conf/events-service-api-store.properties &

Anyway, I see all of this as a workaround and I want something more stable; I'm sure someone will guide me with the right steps.

BR
Abdulrahman Kazamel
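For reference, the manual per-node restart described above can be scripted across the three hosts. This is only a sketch: the install path under /opt/appdynamics is an assumption based on the default location mentioned in the post, and the loop only builds and prints the SSH command lines so they can be reviewed before running them.

```shell
# Dry-run sketch: build the per-node workaround commands for review.
# ES_HOME is an assumed path; adjust to your actual Events Service install.
ES_HOME=/opt/appdynamics/platform/product/events-service/processor
cmds=""
for host in events-01 events-02 events-03; do
  cmds="$cmds
ssh appd@$host 'cd $ES_HOME && bin/events-service.sh stop -f && rm -f events-service-api-store.id elasticsearch.id && nohup bin/events-service.sh start -p conf/events-service-api-store.properties &'"
done
# Review the generated commands, then run each line manually.
printf '%s\n' "$cmds"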
index=foo sourcetype=json_foo source="az-foo"
| rename tags.envi as env
| search env="*A00001*" OR env="*A00002*" OR env="*A00005*" OR env="*A00020*"
| stats count by env
| eval env=case(match(env,"A00001"),"PBC", match(env,"A00002"),"PBC", match(env,"A00005"),"KCG", match(env,"A00020"),"TTK", true(),env)
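If this mapping grows, one alternative (sketched here; env_sites.csv and its site column are hypothetical, not from the thread) is to keep the code-to-name pairs in a lookup file instead of a long case():

```
index=foo sourcetype=json_foo source="az-foo"
| rename tags.envi as env
| rex field=env "(?<env_code>A\d{5})"
| search env_code IN ("A00001","A00002","A00005","A00020")
| stats count by env_code
| lookup env_sites.csv env_code OUTPUT site
```

New codes then only need a new row in the lookup rather than an edit to the search.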
Hi @Mr_Sneed , good for you, see you next time! Let us know if we can help you more, or, please, accept one answer (possibly your own) for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
In splunkd.log I found an interesting entry that mentioned the hostname not being resolvable. I changed the hostname and everything works. Thanks for the help!
Hi @Mr_Sneed , as you can read at https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Deploymentclientconf , you have to insert in your deploymentclient.conf:

[target-broker:deploymentServer]
targetUri = 10.1.10.69:8089

That's the output of the "splunk set deploy-poll" command, nothing else. Then you should check (using telnet) that the route on port 8089 between the client and the Deployment Server is open. Ciao. Giuseppe
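The connectivity check above can also be done without telnet. A hedged sketch, assuming bash with /dev/tcp support and the `timeout` utility (the IP and port are the ones from this thread):

```shell
# Minimal port-reachability helper; prints "open" or "closed".
check_port() {  # usage: check_port HOST PORT
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null \
    && echo "open" || echo "closed"
}

# From the forwarder, check the Deployment Server management port:
check_port 10.1.10.69 8089
```

If this prints "closed", the firewall or routing between the client and the Deployment Server is the first thing to investigate.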
I am running this query:

index=* source="/somesource/*" message "403" | search level IN (ERROR)

And the response is:

{ "instant": { "epochSecond": 1707978481, "nanoOfSecond": 72000000 }, "thread": "main", "level": "ERROR", "message": "Error while creating user group", "thrown": { "commonElementCount": 0, "extendedStackTrace": "403 Forbidden:" }, "endOfBatch": false, "threadId": 1, "threadPriority": 5, "timestamp": "2024-02-15T06:28:01.072+0000" }

Now, when I run the following query:

index=* source="/somesource/*" message "403" | search level IN (ERROR) | eval Test=substr(message,1,5) | eval Test1=substr(thrown.extendedStackTrace, 1, 3) | table Test, Test1

I get a value for Test; the correct substring appears (the output is "Error"). But Test1 is an empty string, whereas I am expecting "403". Since message is at the root it works, but because extendedStackTrace is under thrown, thrown.extendedStackTrace is not returning the correct result. Although, if I do:

...| table Test, Test1, thrown.extendedStackTrace

a proper value does come through for thrown.extendedStackTrace. What am I missing?
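A likely cause, offered as a hedged note: in eval, a field name containing dots must be wrapped in single quotes, otherwise thrown.extendedStackTrace is parsed as an expression rather than a field reference (table, by contrast, accepts the dotted name directly). A sketch of the corrected query:

```
index=* source="/somesource/*" message "403"
| search level IN (ERROR)
| eval Test=substr(message,1,5)
| eval Test1=substr('thrown.extendedStackTrace', 1, 3)
| table Test, Test1
```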
Hi @HPACHPANDE , you have to save the results of your search in a lookup using the outputlookup command (https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Outputlookup). The fields to save in the lookup depend on your search. Ciao. Giuseppe
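A minimal sketch of the first requirement with outputlookup (the index, sourcetype, and lookup filename here are hypothetical placeholders, not from the thread):

```
index=your_index sourcetype=your_sourcetype
| table Host SourceIP DestinationIP
| outputlookup append=true notable_entries.csv
```

With append=true the new rows are added to the existing lookup instead of replacing it, which fits the use case of accumulating entries from previously triggered notables.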
Tested example for one-shot use to rebuild all thawed buckets for a single index; it also includes the index name, as required by newer versions:

ls -dA /data/idx/hot/splunk/{index-name}/thaweddb/db_* | xargs -I BUCKET --max-procs=10 /opt/splunk/bin/splunk rebuild BUCKET {index-name}
Hello Team, I need help regarding the points below:

1] How to add an entry from the search that was run, with the fields Host, SourceIP and DestinationIP, into a lookup table.

2] How to add an entry into a lookup table from the triggered notable, or from the contributing events of the notable.

The requirement here is to create a correlation rule from the lookup values, which will be taken from previously triggered notables.
The emulation works wonderfully when doing it in my test environment; however, when doing the emulation on the search head, the "INTERESTING FIELDS" field names and their values were overriding the extracted values:

host  Component  Value  _time
      F_Type_1  F_Type_1_Section_5_Value  2024-02-14 21:28:25
      F_Type_1  F_Type_1_Section_5_Value  2024-02-14 21:28:25
      F_Type_1  F_Type_1_Section_5_Value  2024-02-14 21:28:25

So I had to remove the auto-extracted field at the beginning. Here is the final emulation on live data:

| fields - Section_5
| dedup host
| eval data = split(_raw, " ")
| eval data = mvfilter(match(data, "^Component="))
| mvexpand data
| rename data AS _raw
| extract pairdelim=",", kvdelim="="
| rename Section_5 AS Value
| table host Component Value _time

Thank you so much for your help!
Due to an issue with proper cleanup of idle processes, the number of Python processes (appserver.py) running on the system constantly grows. Because of the resulting system-wide memory growth, these stale processes eventually cause an OOM. Run the following search to find whether any search head is impacted by this issue, and what percentage of total system memory is held by stale processes running for more than 24 hours. If these processes are using more than 15% of total system memory, then run the script to kill the stale processes.

index=_introspection host=<all search heads> appserver.py data.elapsed > 86400
| dedup host, data.pid
| stats dc(data.pid) as cnt sum("data.pct_memory") AS appserver_memory_used by host
| sort - appserver_memory_used

On Linux/Unix you can use the following script to kill the stale processes and reclaim memory.

kill -TERM $(ps -eo etimes,pid,cmd | awk '{if ( $1 >= 86400) print $2 " " $4 }' | grep appserver.py | awk '{print $1}')
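The same PID selection can be done in a single awk pass, which avoids relying on appserver.py appearing in a particular column. A hedged sketch: the sample `ps` output below is fabricated for illustration; in production you would feed the pipeline from `ps -eo etimes=,pid=,args=` and pass the result to `kill -TERM` after reviewing it.

```shell
# Hypothetical sample of `ps -eo etimes=,pid=,args=` output (etimes, pid, command)
sample='100000 1234 python /opt/splunk/lib/python3.7/appserver.py
3600 5678 python /opt/splunk/lib/python3.7/appserver.py
200000 9999 /opt/splunk/bin/splunkd'

# Select PIDs of appserver.py processes older than 86400 s (24 h)
stale_pids=$(printf '%s\n' "$sample" | awk '$1 >= 86400 && /appserver\.py/ {print $2}')
printf '%s\n' "$stale_pids"
# In production: kill -TERM $stale_pids
```

Only the first process qualifies here: the second is too young, and the third is splunkd, not appserver.py.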
The Search Head appears to have a rogue Python process (appserver.py) that slowly eats away all memory on the system and eventually causes an OOM, which requires a manual restart of splunkd; then the issue slowly starts creeping up again.
Were you able to find a way to resolve this issue? We're seeing the same thing, complete with the same error message in log.log. For future users, the way to get around SSO if the setup fails is to append ?loginType=uba to the end of your login URL (https://example.com/?loginType=uba)
It would help to know what you've already tried. Have you looked at it the other way, that is, what sources does each DM use? | tstats count from datamodel=<DM name> by source You should be able to combine that with | rest /services/data/models and map to list the sources used by all datamodels.
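Combining the two could look like the sketch below. This is an assumption, not a tested search: map re-runs the tstats query once per datamodel title, the maxsearches value is arbitrary, and map has result-count limits, so try it on a small set of models first.

```
| rest /services/data/models splunk_server=local
| fields title
| map maxsearches=100 search="| tstats count from datamodel=$title$ by source | eval datamodel=\"$title$\""
| stats values(source) as sources by datamodel
```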
Hi, just send all events to SCP as from any UF. If you also need dbxquery on SCP, then you must install DB Connect and the needed DB drivers into it, and of course open the FW from the SCP search head(s). r. Ismo
Multi-line explains why the default Component and Section_5 do not contain all data. Do not worry about props.conf, then. This is what you can do:

| sort host _time
| eval data = split(_raw, " ")
| eval data = mvfilter(match(data, "^Component="))
| mvexpand data
| rename data AS _raw
| extract
| rename Section_5 AS Value
| table host Component Value _time

This is an emulation you can play with and compare with real data:

| makeresults
| eval _raw="TimeStamp Component=F_Type_1,.....,Section_5=F_Type_1_Section_5_Value
Component=F_Type_2,.....,Section_5=F_Type_2_Section_5_Value
Component=F_Type_3,.....,Section_5=F_Type_3_Section_5_Value"
``` data emulation above ```

The output is then:

host  Component  Value  _time
      F_Type_1  F_Type_1_Section_5_Value  2024-02-14 21:28:25
      F_Type_2  F_Type_2_Section_5_Value  2024-02-14 21:28:25
      F_Type_3  F_Type_3_Section_5_Value  2024-02-14 21:28:25

Hope this helps
Are there any messages in the forwarder's splunkd.log that might explain what is happening?  Look for "DC:" in the log.
Thank you so much!
Hi @Geoff.Wild, Thanks for following up with the solution! 
@Ryan.Paredez you were close. Here's the final solution: install a second DBAgent on the connector server using the latest from AppD:

1. Download db-agent-24.1.0.3704.zip
2. mkdir /opt/appdynamics/dbagent1
3. Unzip the agent there.
4. Replace the lib/mysql-connector-java-8.0.27.jar with mysql-connector-java-commercial-5.1.13.jar (this one works with older MySQL DBs)
5. Change the filename to: mysql-connector-java-8.0.27.jar
6. Make sure the agent has a different name than the current one by updating the startup options in the start-dbagent script like so:
   exec "$JAVACMD" "${JVM_OPTS[@]}" -Ddbagent.name="OldMysql-DB-Agent1" -jar "${APP_HOME}/db-agent.jar"
7. Update the conf/controller-info.xml as needed.
8. Start the agent.
9. In the AppD GUI, update the collector to use the new agent, OldMysql-DB-Agent1 (or whatever you named it).