All Posts


Hi @yuanliu , Thanks for your response. If the user ID field contains multiple values separated by a delimiter, can it still be used as a lookup field when comparing it with other data that also contains user IDs? For example, if the field contains four user IDs and one of them exists in another lookup table, the lookup fails because the values are comma-separated. I would like to understand how to perform a lookup on multiple values separated by a delimiter.
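For reference, a common pattern here (a sketch, with a hypothetical lookup name other_user_lookup.csv and a hypothetical output field) is to split the delimited value back into a multivalue field and expand it before the lookup, so each user ID is matched on its own row:

... your base search ...
| makemv delim="," userid
| mvexpand userid
| lookup other_user_lookup.csv userid OUTPUT matched_field

After the lookup you can stats the rows back together by whatever key identifies the original event, if you need one row per event again.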
Hi @gcusello, as per your comment, yes, my UF has outbound rules set for port 9997, but it is still not working. Please suggest.
"I need the multivalue field to remain unchanged in the external lookup so that I can accurately compare user IDs with other lookups. I have tried using mvexpand before exporting, but it introduced other challenges."

Making the output a comma-delimited list is exactly what keeps the multivalue unchanged. If a userid from other data sources is to match 890000, for example, the only way to do this is to put 890000 on its own row. If you want concrete help from volunteers, you need to carefully describe your use case. How is this to "compare user IDs with other lookups"? What does the data look like? What are the symptoms of the "problems" caused by mvexpand? As always, illustrate your point with exact data (anonymize as necessary).
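As a concrete sketch of "on its own row" (lookup name hypothetical), expanding the multivalue field before writing the lookup gives each user ID its own row, which other searches can then match against:

... your search producing userid and the other fields ...
| mvexpand userid
| outputlookup userid_lookup.csv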
Is it possible that your raw event is noncompliant? This is what your illustrated event format suggests. If that format is exact, Splunk cannot extract anything other than the "name" field. There are two elements that violate JSON syntax: the mock event contains two bare strings that are not key-value pairs, marked as "MISSING-KEY1" and "MISSING-KEY2" in the following pretty-print of a "corrected", conformant JSON object:

{
  "name": "",
  "MISSING-KEY1": "",
  "pid": 8,
  "level": 50,
  "error": {
    "message": "Request failed with status code 500",
    "name": "AxiosError",
    "stack": "AxiosError: Request failed with status code 500\n )",
    "config": {
      "transitional": {
        "silentJSONParsing": true,
        "forcedJSONParsing": true,
        "clarifyTimeoutError": false
      },
      "adapter": ["xhr", "http"],
      "transformRequest": [null],
      "transformResponse": [null],
      "timeout": 0,
      "xsrfCookieName": "X",
      "xsrfHeaderName": "X-",
      "maxContentLength": -1,
      "maxBodyLength": -1,
      "env": {},
      "headers": {
        "Accept": "application/json, text/plain, */*",
        "Content-Type": "application/json",
        "Authorization": "",
        "User-Agent": "",
        "Accept-Encoding": "gzip, compress, deflate, br"
      },
      "method": "get",
      "MISSING-KEY2": ""
    },
    "code": "ERR_BAD_RESPONSE",
    "status": 500
  },
  "eventAttributes": {
    "Identifier": 2025732,
    "VersionNumber": "A.43"
  },
  "msg": "msg:data:error",
  "time": ":48:38.213Z",
  "v": 0
}

If your actual events are non-compliant, Splunk will not have a value for error.status. By the way, the command "eval status=case(like(error.status, "4%"), "4xx", like(error.status, "5%"), "5xx")" is wasted, as your stats command does not use the field status.
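One aside on that eval: because the field name contains a dot, it also needs single quotes inside eval, otherwise the dot is treated as string concatenation. A sketch of the search with the status field actually used by stats (keeping the asker's field names):

index=* source IN ("/aws/lambda/*") msg="**" (error.status=4* OR error.status=5*)
| eval status=case(like('error.status', "4%"), "4xx", like('error.status', "5%"), "5xx")
| stats count by status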
Hi @bowesmana , my bad, I meant Splunk lookups. I'm using the outputlookup command to export search output data to a Splunk lookup. Regards, VK
Are your fields auto extracted, i.e. if you just do

index=* source IN ("/aws/lambda/*") msg="**"

in verbose search mode, do you see error.status in the left hand panel? If so, can you see values of 4xx and 5xx? It may be that if your JSON objects are longer than 5k, the status field may not be auto extracted, so you could try

index=* source IN ("/aws/lambda/*") msg="**"
| spath error.status
| search (error.status=4* OR error.status=5*)
| eval status=case(like(error.status, "4%"), "4xx", like(error.status, "5%"), "5xx")
| stats count by error.status

which will tell you if it's a JSON object limit.
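If the 5k limit does turn out to be the culprit, the setting involved should be extraction_cutoff under the [spath] stanza of limits.conf on the search head; treat this as a sketch and verify against the limits.conf spec for your version before changing anything:

# limits.conf on the search head
[spath]
# default is 5000 bytes; raise only as far as your JSON events need
extraction_cutoff = 10000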
What do you mean by an "external lookup" and how are you exporting it? Do you mean outputlookup or something else?
Based on your example data, that would appear to work. If you copy in this example search you can see your spath and stats command do indeed extract the correct data and give you a count of 1, so what is your problem? Are you saying this is not working for you? If so, it would indicate your data is perhaps not as you have shown.
After we added tools.sessions.timeout, it works. Thank you!
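For anyone finding this later, a sketch of where that setting lives (the value is in minutes; verify against the web.conf spec for your version):

# web.conf on the search head
[settings]
tools.sessions.timeout = 120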
That!  I think our other systems were set up a bit differently. Adding the index to the inputs.conf was what I was looking for!   Many thanks!
Hello All,

This is my first post. I have just started learning to write Splunk queries. We have one application sitting in a Kubernetes cluster. We are calling an endpoint of the application and doing some activity. I can see in the logs the JSON which we sent while calling the endpoint:

{
  "header": {
    "version": "1.0",
    "sender": "ABC",
    "publishDateTime": "2025-03-12T15:54:32Z"
  },
  "audit": {
    "addDateTime": "2024-04-19 05:42:57",
    "addBy": "PP"
  }
}

I want to find the count of all requests I have made where addBy is PP. I was trying multiple things like spath and search but not getting how to do it.

kubernetes_cluster="abc*" index="aaaa" sourcetype = "kubernetes_logs" source = *pub-sub*
| spath output=myfiled path=audit.addBy
| stats count by myfiled
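A minimal sketch of one way to get that count, assuming the JSON is the event body so that audit.addBy is reachable with spath (index, sourcetype and source copied from the query above):

kubernetes_cluster="abc*" index="aaaa" sourcetype="kubernetes_logs" source=*pub-sub*
| spath output=addBy path=audit.addBy
| search addBy="PP"
| stats count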
I have a table with three columns. When I create a line chart using Visualization options, it uses column1 as x-axis and column2 as y-axis. When I hover over the dots, it shows the text for the Y axis value. I would like to display column3 value when I hover over the dots. How can I do this?
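There is no built-in option in classic Simple XML charts to add an arbitrary field to the hover tooltip; it shows the x value, the series name and the y value. One hedged workaround (a sketch, assuming columns named column1/column2/column3 and one column3 value per column1 value) is to make column3 the series name so it appears in the tooltip:

... your search producing column1 column2 column3 ...
| chart latest(column2) over column1 by column3

Note this also changes the legend, since each distinct column3 value becomes its own series.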
Hi Team, I have a multivalue field in one of the user fields, along with other fields. However, when exporting the data to an external lookup, the multivalue field is converted into a single, comma-separated value. For example, in my search, the userid field appears as follows:

userid
890000
pklstu
790000
c_pklstu

However, after exporting to the external lookup, it transforms into:

userid
890000,pklstu,790000,c_pklstu

I need the multivalue field to remain unchanged in the external lookup so that I can accurately compare user IDs with other lookups. I have tried using mvexpand before exporting, but it introduced other challenges. Is there a way to ensure the multivalue field remains intact while exporting to the external lookup?
WTH is an "index(er) token"? Forwarders are configured exactly the same way in Windows as in Linux (except for file paths) - create apps in the Deployment Server and add that app to the appropriate server class so the UFs download it. You'll probably want at least three apps - the Universal Forwarder app as downloaded from Splunk Cloud, plus one for Linux inputs, and one for Windows inputs. There should be no mucking about with the UF itself.

The monitor stanza goes in the app's inputs.conf file. It will look something like this.

[monitor:///some/path/to/log4j/file.log]
sourcetype = mysourcetype
index = myindex

[monitor://C:\Some\Path\to\log4j\file.log]
sourcetype = mysourcetype
index = myindex
I'm brand new to this and am hopeful this has a ready-made answer I've not been able to find (yet) but: We installed the universal forwarder from our Splunk Cloud instructions: Set up the .spl file and added a monitor to a log4j folder of a software that server runs.  How we set this up on our non-Windows systems is with indexer tokens that are used at setup.  In my case with this windows system, the installation and set up goes fine. I don't see any errors in the splunkd.log on the host machine. But there's no data for that index.  How do I add the specific index token to the universal forwarder?
Hi, I need help in finding DistinctAdminUserCount and DistinctAdminUserNames for each associated Name inside the test or prod object:

{"prod":{},"test":{"DistinctAdminUser":["streaming","Create","","Application.","App.","App.","obi","Users","platform"],"TotalSinkAdminUsers":33,"TotalNSP3Count":11,"TotalSourceAdminUsers":10,"DistinctAdminUserCount":11,"TotalStreamAdminUsers":12,"TotalAdminUser":55,"nsp3s":[{"StreamAdminUserNames":["App."],"SourceAdminUserNames":["preprod"],"DistinctAdminUserCount":5,"SinkAdminUserCount":5,"SourceAdminUserCount":1,"DistinctAdminUserNames":["Technology","2","3","4","5"],"StreamAdminUserCount":1,"TotalAdminUserCount":7,"SinkAdminUserNames":["obi"],"Name":"hi-cost-test-sample"},{"StreamAdminUserNames":["preprod"],"SourceAdminUserNames":["admin.preprod"],"DistinctAdminUserCount":3,"SinkAdminUserCount":3,"SourceAdminUserCount":1,"DistinctAdminUserNames":["preprod","2","3","4","5"],"StreamAdminUserCount":1,"TotalAdminUserCount":5,"SinkAdminUserNames":["ops-tform"],"Name":"hi-cost-test-name"}],"subscriberId":"NSP3"}}

My query so far:

index="*" source="*"
| spath test.nsps{} output=nsps
| mvexpand nsps
| spath input=nsps Name output=Name
| spath input=nsps ReadOnlyConsumerNames{} output=ReadOnlyConsumerNames
| search Name=""
| stats values(ReadOnlyConsumerNames) as ReadOnlyConsumerNames by Name
| rename Name as EntityName
| table EntityName ReadOnlyConsumerNames
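A sketch of one way to get those two fields per Name, assuming the path should be test.nsp3s{} (the sample data uses nsp3s, not nsps) and using the field names from the sample:

index="*" source="*"
| spath path=test.nsp3s{} output=nsp3s
| mvexpand nsp3s
| spath input=nsp3s path=Name output=Name
| spath input=nsp3s path=DistinctAdminUserCount output=DistinctAdminUserCount
| spath input=nsp3s path=DistinctAdminUserNames{} output=DistinctAdminUserNames
| where isnotnull(Name)
| stats values(DistinctAdminUserNames) as DistinctAdminUserNames max(DistinctAdminUserCount) as DistinctAdminUserCount by Name
| table Name DistinctAdminUserCount DistinctAdminUserNames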
Hi, I am using the below query to capture 4xx/5xx errors, but getting "no results found":

index=* source IN ("/aws/lambda/*") msg="**" (error.status=4* OR error.status=5*)
| eval status=case(like(error.status, "4%"), "4xx", like(error.status, "5%"), "5xx")
| stats count by error.status

This is mostly what my raw event format looks like:

{"name":"","","pid":8,"level":50,"error":{"message":"Request failed with status code 500","name":"AxiosError","stack":"AxiosError: Request failed with status code 500\n )","config":{"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false},"adapter":["xhr","http"],"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"X","xsrfHeaderName":"X-","maxContentLength":-1,"maxBodyLength":-1,"env":{},"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json","Authorization":"","User-Agent":"","Accept-Encoding":"gzip, compress, deflate, br"},"method":"get",""},"code":"ERR_BAD_RESPONSE","status":500},"eventAttributes":{"Identifier":2025732,"VersionNumber":"A.43"},"msg":"msg:data:error","time":":48:38.213Z","v":0}
Mmm, OK, I was going to suggest the SPL2 channel in the Splunk Slack group, but I see you already found that.
If you download the Full Tor Node List Lookup App (https://splunkbase.splunk.com/app/7208), it already comes with a csv file of IPs, but after configuring this app, it does not override that lookup, and it also does not write to the index mentioned while configuring the input. @efloss
Splunk: 8.0.3 (I know it's old; we're working on approvals to upgrade)

We're receiving behavior I have never encountered before on a Windows-based server, and I want to see if anybody else has encountered it here, because this may be happening on many of our systems where users claim the product isn't working.

We have a tstats command running on a datamodel for a dashboard. When loading less than 24 hours' worth of results the panel works as expected. The second we switch to a date range (March 11 – March 11 as an example) the other panels load fine but this one takes much longer to load (up from 1.1 minutes to over 5 minutes). At some point during loading, the results begin shifting fields. For instance:

Normal:
Time | Host | User | Status | Description | System
<time> | <host> | <user> | <status> | <description> | <system>

Then new results begin showing up:
Time | Host | User | Status | Description | System
<tags> | <status> | <host> | <time>

This continues on and on until eventually the search fails and the following error is presented (one example):
"StatsFileReader file open failed file=D:\Splunk\var\run\splunk\dispatch\_aWEtbG96ZW5k_ aWEtbG96ZW5k _US1BdWrpdA__search8_1741807955.367128\statstmp_21805.sb.lz4"

I've done the following to troubleshoot:
- Turned off data model acceleration
- Verified they're running the default view and not a custom one
- Verified this happens on multiple dashboards using similar tstats searches
- If I try to replicate in a | from datamodel search I do not see this happening. It seems to only happen with the tstats based search
- Clicked "Open in Search" and saw the exact behavior there as well
  - Job inspector shows a lot of the following error: ERROR Bucket - Failed to discretize value 'report' of field '_time'. There are 4 log files' worth of these... However, there are a bunch of different values: track_event_signatures, windows, etc.
  - After these it says skipping prestats because input looks already in prestats format

Here is a copy of the tstats query, which has been modified a little because this is from a paid app and I don't want to upset the publisher:

| tstats prestats=true summariesonly=false allow_old_summaries=false count as count FROM datamodel=Privileged WHERE (nodename=Privileged_Execution "Privileged_Execution.tag"=* "Privileged_Execution.user"="*" host="*") BY _time span=1s, host, "Privileged_Execution.process", "Privileged_Execution.user", "Privileged_Execution.description", "Privileged_Execution.status", "Privileged_Execution.tag"
| bucket _time span=1s
| stats dedup_splitvals=t count AS count by _time, host, Privileged_Execution.process, Privileged_Execution.user, Privileged_Execution.description, Privileged_Execution.status, Privileged_Execution.tag
| sort limit=`recent_events_tables_limit` -_time
| rename _time as Time, host as Host, "Privileged_Execution.process" as Process, "Privileged_Execution.user" as User, "Privileged_Execution.description" as Description, "Privileged_Execution.status" as Status, "Privileged_Execution.tag" as tag
| fillnull count
| fields + Time, Host, Process, User, Description, Status, tag, count
| join max=0 type=left tag [| inputlookup system_tag | rename system as System]
| fields - tag, count
| fillnull value="" System
| mvcombine System
| sort 0 - Time
| convert timeformat="%m/%d/%Y %H:%M:%S %z" ctime(Time)