All Posts


Hi Team, We need your help reviewing the Data Collectors created in AppDynamics for a .NET WCF service. The newly created Data Collectors are not working, and we need your guidance. Thank you, Deepak Paste
Hi, I am trying to upload an Elasticsearch log file to Splunk. This is an example of one entry in a long log:

{"_index":"index-00","_type":"_doc","_id":"TyC0RIkBQC0jFzdXd-XG","_score":1,"_source":"{"something_long":"long json"}\n","stream":"stderr","docker":{"container_id":"d48887cdb80442f483a876b9f2cd351ae02a8712ec20960a9dc66559b8ccce87"},"kubernetes":{"container_name":"container","namespace_name":"namespace","pod_name":"service-576c4bcccf-75gzq","container_image":"art.com:6500/3rdparties/something/something-agent:1.6.0","container_image_id":"docker-pullable://art.com:6500/3rdparties/something/something-agent@sha256:02b855e32321c55ffb1b8fefc68b3beb6","pod_id":"3c90db56-3013a73e5","host":"worker-3","labels":{"app":"image-service","pod-template-hash":"576c4bcccf","role":"image-ervice"}},"level":"info","ts":1689074778.913063,"caller":"peermgr/peer_mgr.go:157","msg":"Not enough connected peers","connected":0,"required":1,"@timestamp":"2023-07-11T11:26:19.133326179+00:00"}}

As you can see, the timestamp is at the end, so I have set up my props.conf as follows:

[elastic_logs]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
description = make sure timestamp is taken
pulldown_type = 1
TIME_PREFIX = "@timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 1000

I can see the timestamp in the Splunk entries, but that is all I can see now; none of the other fields are displayed. What am I doing wrong?
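A hedged sketch of an adjusted stanza to try, not a confirmed fix: the fractional part of "@timestamp" in the sample has nine digits (nanoseconds) while %6N expects six, and the empty DATETIME_CONFIG line may interfere with timestamp processing, so a variant worth testing (the stanza name elastic_logs is from the post) is:

[elastic_logs]
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
TIME_PREFIX = "@timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%z
MAX_TIMESTAMP_LOOKAHEAD = 1000

Note that with INDEXED_EXTRACTIONS = json these settings must live on the instance that parses the file (e.g. where the upload happens), and KV_MODE = none is often set at search time to avoid duplicate field extraction.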
Hello, We have a few apps that are no longer needed in our on-premises environment. We maintain a git repo for the configs. Can anyone please help me with the steps to uninstall/remove an app? Thanks
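A minimal sketch of the usual CLI route, assuming a standalone on-premises instance and an app named my_old_app (hypothetical name); apps managed by a deployment server or cluster manager should instead be removed from the source location and the git repo:

# On the Splunk host (app name is hypothetical)
$SPLUNK_HOME/bin/splunk remove app my_old_app -auth admin:changeme
# Or remove the app directory and restart:
# rm -rf $SPLUNK_HOME/etc/apps/my_old_app
$SPLUNK_HOME/bin/splunk restart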
Completed this. I added | table status, range, which got rid of any colour on the dashboard, and the colour of the range took over.
index=serverX sourcetype=CAServer
| dedup ID
| stats count
| eval status=if(count=0,"XXX is ok","XXX is not ok")
| rangemap field=count low=0-0 severe=1-100

This works and returns a count of 34 in red; however, I want to return the status in red, not just the number. I can return the status with | stats status, but it is in black and white. Any help is appreciated.
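A hedged sketch of one way to get the colour to follow the status text, based on the resolution in the reply above: keep the range field that rangemap adds in the table so dashboard colour formatting can key off it (the field names and range values are from the post):

index=serverX sourcetype=CAServer
| dedup ID
| stats count
| eval status=if(count=0,"XXX is ok","XXX is not ok")
| rangemap field=count low=0-0 severe=1-100
| table status, range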
How can I get the SVC usage for saved searches and ad-hoc searches? These logs don't have it:

index="_internal" data.searchType="scheduled"

index="_audit" sourcetype="audittrail" action="search" info="completed"
Thank you! This is really helpful.
I had the same symptoms. In my case, the root cause was that I was attempting to use the "Add Data" upload functionality on the search head (i.e. at https://myhost.splunkcloud.com/) instead of on the IDM (i.e. at https://idm.myhost.splunkcloud.com/). Uploading there worked without problems.

I do find it frustrating that the IDM and the search head use the same interface, yet you have to expect various chunks of functionality to be broken on one and work on the other. But as far as things go, it's not too bad to follow the principle "if something fails on one, try it on the other".
I see what you mean about the missing | addinfo; my mistake. I tried three different ways: with a time picker token, without one, and with no token plus | addinfo, but I'm still getting an error.
Hi! I'm stuck writing a query with an additional check and can't find a way out. I would be glad if you could point me in the right direction or help with advice. We have the following custom logic:

1. When a user performs some action (what it is doesn't matter), we generate an event in index=custom with the following fields: evt_id: 1, user_id: 555 (example).
2. The user must confirm that they are performing this action in a third-party app, and that app generates the next event in index=custom: evt_id: 2, user_id: 555 (example), msg: confirmed.
3. If the user did NOT CONFIRM the action from step 1, we need to generate an alert. That means Splunk did not receive evt_id: 2 in index=custom.

The alert logic is as follows: alert when evt_id: 1 occurred more than 5 minutes ago (the time the user has to confirm the action) and there is NO evt_id: 2 with the same user_id by the time the alert runs.

I understood that I need to do the first search like (example): index=custom evt_id=1 earliest=-7m latest=-5m. But I have no idea how to implement the additional condition with evt_id: 2. If we didn't have the user_id field, I could use the stats count command, but I need to correlate both events (1 and 2) via the user_id field.

Thanks for your help, have a nice day.
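A hedged sketch of one common pattern for this, under the assumptions in the post (index=custom, fields evt_id and user_id): pull both event types, then keep only users whose action is old enough and has no confirmation:

index=custom (evt_id=1 OR evt_id=2) earliest=-15m
| stats min(eval(if(evt_id=1,_time,null()))) as action_time, count(eval(evt_id=2)) as confirmations by user_id
| where isnotnull(action_time) AND confirmations=0 AND action_time<=relative_time(now(),"-5m")

The 15-minute window is an arbitrary example; it only needs to be comfortably wider than the 5-minute confirmation deadline.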
Thank you. I knew there was probably some way to iterate but couldn't figure it out. Thanks again.
Thank you. That did the trick. Adding | stats values(open_ports) by destination allows me to group and add them all in one row. Thank you again for the prompt help.
Splunk PS installed UBA a while back, and I just noticed that we are not getting OS logs from those servers into Splunk Enterprise. Since we have a 10-node cluster, I was looking for a quicker way to manage them. Is there a reason I shouldn't connect the Splunk instances running on all of those nodes to the deployment server?
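If connecting them turns out to be acceptable, a minimal deploymentclient.conf sketch (the deployment server host and port are hypothetical) on each node would look like:

[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089

Whether this is advisable on UBA nodes is a separate question; UBA ships with its own managed Splunk components, so changes there may be overwritten or unsupported.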
Hello, community, I wanted to share a challenge that I have mapping fields to Data Models. The issue is that I have identified/created fields that are required for a Data Set, but they are not auto-populating, i.e. they cannot be seen by the Data Model/Set. Any suggestions on where I might be going wrong? Regards, Dan
I tried a couple of times and kept seeing the same error. After that I tried the workaround above:

cd $SPLUNK_HOME/etc/apps/ForensicInvestigator/default/data/ui/views
iconv -f UTF-16LE -t UTF-8 portsservices.xml -o portsservices1.xml

I removed the old file and renamed portsservices1.xml to portsservices.xml, and it worked for me. After that I started Splunk and everything is working as expected.
I strongly advise against modifying datamodels that are not your own. If you change a DM, your changes will override any future versions of the DM that may be released. Instead, have your dashboard combine the values by changing "dropped" to "blocked" (note the single quotes required around field names containing dots):

| eval 'IDS_Attacks.action'=if('IDS_Attacks.action'="dropped","blocked",'IDS_Attacks.action')
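For instance, assuming the CIM Intrusion_Detection data model (the dashboard's base search is not shown in the thread, so this is only a sketch), the remap can sit right after the tstats call, with a re-aggregation to merge the combined values:

| tstats count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action
| eval 'IDS_Attacks.action'=if('IDS_Attacks.action'="dropped","blocked",'IDS_Attacks.action')
| stats sum(count) as count by IDS_Attacks.action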
As per the documentation, you need to have the (?) when using Oracle; as I understand it, that is for the OUT REFCURSOR, and if I take it out I get an error. Also, when there is no data to return, Splunk's output matches what I see when the SQL is run directly on the server, so it works with (?). The problem seems to occur when there is data to be returned, but I am not sure what the issue is.
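For reference, a hedged sketch of the call shape being described (schema and procedure names are hypothetical); the trailing (?) is the placeholder that gets bound to the OUT REFCURSOR parameter:

CALL MYSCHEMA.GET_RESULTS(?)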
The output may not be what is desired, but it is correct. The streamstats and delta commands compute the difference between the current result and the previous result, rather than between the current result and the next result (which is unseen at the time). One workaround is to surround the streamstats or delta command with reverse, which changes the order of events and then changes it back:

index=data sourcetype=dataset source="/usr2/data/data_STATISTICS.txt" SQL_ID=ABCD
| reverse
| streamstats current=f window=1 global=f last(NEWCPUTIME) as last_field by SQL_ID
| reverse
| eval NEW_CPU_VALUE=abs(last_field - NEWCPUTIME)
| table _time, SQL_ID, last_field, NEWCPUTIME, NEW_CPU_VALUE
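The delta variant mentioned above would be a sketch like the following (delta does not support a by clause, so it fits best here where the search is already filtered to a single SQL_ID):

index=data sourcetype=dataset source="/usr2/data/data_STATISTICS.txt" SQL_ID=ABCD
| reverse
| delta NEWCPUTIME as NEW_CPU_DELTA
| reverse
| eval NEW_CPU_VALUE=abs(NEW_CPU_DELTA)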
Is it possible to display textual (string) values instead of numbers on the Y axis? I have a time series with a field called "state", which contains an integer number. Each number represents a certain state. Examples:

0="off", 1="on"
0="off", 1="degraded", 2="standby", 3="normal", 4="boost"

Now I would like to have a line or bar chart showing the respective words on the Y axis ticks instead of 0, 1, 2, 3, 4. Note: This was already asked but not answered satisfactorily: https://community.splunk.com/t5/Splunk-Search/Is-it-possible-to-make-y-axis-labels-display-quot-on-quot-and/m-p/222217
@richgalloway Hi, I tried the below one, and we are getting the following error: Error in where command: The operator at '::trapdown AND _time<=relative_time(now(),"-5m") is invalid. Please help me. Thanks!