All Topics



I started using Splunk very recently. I have a couple of monitors in my network that I want to watch via Splunk, and I want to integrate Snort into Splunk so that I get a good dashboard for monitoring my network and device logs. When I tried to do that, I was unable to configure my forwarders: my Splunk Enterprise server is running on the IP 127.0.0.1:8000, but I want it to run on 192.168.0.112:8000, and I cannot find which file to edit. FYI, I am running the Splunk server on Windows and want to connect forwarders from both Ubuntu and Windows clients. Can anyone please help me?
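A minimal sketch of the two files usually involved, assuming default paths under $SPLUNK_HOME\etc\system\local on the Windows server. Note that forwarders connect to the receiving port (typically 9997), not to the web port 8000:

```
# web.conf -- bind the web UI to a specific address
[settings]
server.socket_host = 192.168.0.112

# inputs.conf -- enable receiving data from forwarders
[splunktcp://9997]
disabled = 0
```

On each forwarder, point the outputs at 192.168.0.112:9997, for example with `splunk add forward-server 192.168.0.112:9997`.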
Hello Splunk Community,

I have installed the Cloudflare for Splunk app (https://splunkbase.splunk.com/app/4501) on Splunk Cloud and have successfully configured Logpush to send logs from Cloudflare to Splunk, following the official instructions. I have verified that the logs are arriving correctly in Splunk using search queries like:

index=cloudflare | head 10

I can see the logs in the search results, confirming that data ingestion is working. However, when I open the Cloudflare for Splunk dashboards, they are empty, showing "No results found". I've checked the following:

1. Data arrival - Logs are arriving correctly in Splunk (index=cloudflare contains data).
2. Sourcetype - The logs are being assigned the expected sourcetypes (cloudflare:access, cloudflare:network, etc.).
3. Time range - Made sure the dashboards are set to a broad time range (Last 24 hours or All Time).
4. Permissions - Ensured that the user running the dashboards has access to the cloudflare index.
5. Dashboard searches - Manually ran the searches used in the Cloudflare dashboards, but they returned no results.

Questions: Has anyone faced this issue before? Are there any known fixes or configuration adjustments required for the Cloudflare for Splunk dashboards to populate correctly? Do I need to manually adjust field extractions or event types for the dashboards to work?

I appreciate any guidance or recommendations you can provide. Thanks in advance for your help! Best regards,
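If the dashboards key off eventtypes or macros rather than the raw index, a quick way to see what the data actually carries (a hedged diagnostic, not the app's own search) is:

```
index=cloudflare
| stats count by sourcetype, eventtype
```

If eventtype comes back empty, the app's eventtype definitions probably do not match your sourcetypes or index, which would explain empty dashboards even though data is present.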
Is there a search query that gives the list of all the knowledge objects that are enabled in ES? I want a list of all the correlation searches, macros, lookups, and saved searches.
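A sketch using the REST endpoints, assuming you have permission to query them; the action.correlationsearch.enabled flag is how ES marks correlation searches. The three searches below list enabled correlation searches, macros, and lookup definitions respectively:

```
| rest /services/saved/searches | search action.correlationsearch.enabled=1 disabled=0 | table title eai:acl.app

| rest /services/admin/macros | table title eai:acl.app

| rest /services/data/transforms/lookups | table title eai:acl.app
```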
I am trying to fix the issue of my Zeek logs not being broken into separate events. These logs are in JSON format and start with '{"ts":' and end with '}' (excluding single quotes). Given they are on separate lines, I would expect the code below to work.

# In /opt/splunk/etc/system/local/props.conf
# which I copied from the ../default/props.conf
[default]
...
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = True
...

At this point I think my issue may be not knowing which stanza to place that in. I do have the SPLUNK_TA_ZEEK add-on, but that is in a specific app (not Search & Reporting). Looking under sourcetypes in the Web UI, there are zeek, zeek:conn, bro, bro_conn, etc. sourcetypes, but the sourcetypes in my events are zeek_conn, etc. I went ahead and applied the above code to zeek, zeek:conn, bro, and bro_conn.

TLDR: 1. What stanza do I edit? 2. Is the code snippet the correct settings? 3. Do I need to restart the cluster to apply these changes?
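For single-line JSON, SHOULD_LINEMERGE should normally be false; true tells Splunk to re-merge lines, which works against per-line events. A sketch of a per-sourcetype stanza, assuming your events really carry the sourcetype zeek_conn (the stanza name must match the sourcetype exactly):

```
[zeek_conn]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = \{"ts":
KV_MODE = json
```

LINE_BREAKER and SHOULD_LINEMERGE are parse-time settings, so they belong on the indexers (or the heavy forwarder doing the parsing), require a restart there, and only affect newly indexed data; KV_MODE = json is search-time and belongs on the search head.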
I have a Splunk query where one of the eval expressions in the main search is as below. I am not sure why SnapshotTimestamp is divided by 1000, but I presume it is to convert it to seconds. Sorry, I am a newbie.

| eval snapshot_processed = strftime(SnapshotTimestamp/1000, "%Y-%m-%d %H:%M:%S")

I am trying to find the number of days elapsed between "snapshot_processed" and today. I tried to modify the search as below and then view the table for "latencyInDays". However, it does not return any value.

| eval nowstring=now()
| eval latencyInDays=(nowstring-snapshot_processed)/86400

What am I missing?
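strftime() returns a formatted string, so subtracting it from now() (an epoch number) yields null; the arithmetic has to be done on epoch seconds. A sketch, assuming SnapshotTimestamp is epoch milliseconds (hence the division by 1000):

```
| eval snapshot_epoch = SnapshotTimestamp/1000
| eval snapshot_processed = strftime(snapshot_epoch, "%Y-%m-%d %H:%M:%S")
| eval latencyInDays = floor((now() - snapshot_epoch) / 86400)
```

Keep the epoch value for the math and use the strftime() string only for display.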
I have the below combos for region/environment:

Cluster    Space     Region
1) useast01   abs-qpp1   QAEAST
2) uswest01   abs-qpp1   QAWEST
3) usqaf01    abs-qff    QAF

1) If Cluster is useast01 and Space is abs-qpp1, then the region is USEAST
2) If Cluster is uswest01 and Space is abs-qpp1, then the region is USWEST
3) If Cluster is usqaf01 and Space is abs-qff, then the region is USQAF

I would like to have a single dropdown filter for Region (USEAST, USWEST, USQAF) with dynamic label select options for the above combos. I would appreciate it if someone could provide a solution or an approach.
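One way to drive such a dropdown is to derive the region with a case() eval and populate the input from a search; a sketch, assuming a hypothetical index name my_index and the field names Cluster and Space as shown above:

```
index=my_index
| eval Region = case(
    Cluster=="useast01" AND Space=="abs-qpp1", "USEAST",
    Cluster=="uswest01" AND Space=="abs-qpp1", "USWEST",
    Cluster=="usqaf01"  AND Space=="abs-qff",  "USQAF")
| stats count by Region
```

Point the dropdown's dynamic options at this search, with Region as both the label field and the value field.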
Hi Splunkers, is it possible to restrict index access to a specific app context? For example: a user has read access to app A and write access to app B; app A has dashboards on index A; app B has dashboards on index B; but searching through index A is not allowed inside app B. The background: we have built a firewall self-service app where people can check if their connection is blocked by the firewall, and if so, they can open a ticket with one click. Now we encounter some user groups that want to be able to search on their own in their own app. With this, they could currently freely search and analyse our firewall data beyond checking whether their connection is blocked. How can we achieve access control like this, if it is even possible? Thanks in advance!
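For context, Splunk ties index access to roles (authorize.conf), not to apps, so a role that can search an index can search it from any app it can see. A minimal sketch of the role-level control that does exist, with a hypothetical role and index name:

```
# authorize.conf -- index access is granted per role, globally
[role_firewall_selfservice]
srchIndexesAllowed = index_b
srchIndexesDefault = index_b
```

Restricting the same role to index A only inside app B is not something roles can express; the usual workarounds are separate roles per user group, or exposing the firewall data only through tightly scoped dashboards.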
Hey there & Daniel, a simple quick question: does the Event Timeline Viz work with Dashboard Studio too? Thanks & regards, Christophe
Hello, I have a problem that I can't solve. I have an SHC with 4 members (including the captain), Splunk version 7.3.5. We are in a multisite configuration. We wanted to do a test: put one search head in stand-alone mode and simulate a power cut on the other three. Everything worked, then we returned to normal. All clear.

But recently we realized that we had a problem (a bug?). Our 4 SHC members are in the same cluster, verified on the servers directly via CLI. But in the GUI we see two different SH clusters: the first with 3 members, the second with only 1.

show shcluster-status shows the cluster, its 4 members and its ID (starting with EDF6). The [shclustering] stanza in the server.conf file for the 4 search heads has the ID EDF6[...].

Despite this, everything works normally. We've tried a lot of solutions with no results. Is this a bug, or do you have any ideas? Attached are some screenshots to make things easier. Thank you very much.
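One more place to compare, assuming REST access to the members (field names can vary by version): the SHC status endpoint, whose output should agree with the CLI on all four nodes.

```
| rest /services/shcluster/status
| table splunk_server captain.label captain.id
```

If one member reports a different captain ID here than the other three, the GUI discrepancy likely comes from that member's cached cluster state.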
Hello All, this is my first post. I have just started learning to write Splunk queries. We have an application sitting in a Kubernetes cluster. We are calling an endpoint of the application and doing some activity. In the logs I see the JSON we sent while calling the endpoint:

{ "header": { "version": "1.0", "sender": "ABC", "publishDateTime": "2025-03-12T15:54:32Z" }, "audit": { "addDateTime": "2024-04-19 05:42:57", "addBy": "PP" } }

I want to find the count of all requests I have made where the message has addBy as PP. I was trying multiple things like an spath search, but I am not getting how to do it:

kubernetes_cluster="abc*" index="aaaa" sourcetype="kubernetes_logs" source=*pub-sub* | spath output=myfield path=audit.addBy | stats count by myfield
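That query is close; if the goal is just the count of events where audit.addBy is PP, a sketch (keeping the index and source filters from the question as placeholders):

```
kubernetes_cluster="abc*" index="aaaa" sourcetype="kubernetes_logs" source=*pub-sub*
| spath path=audit.addBy output=addBy
| where addBy="PP"
| stats count
```

If the JSON is the whole event, Splunk's automatic extraction may already give a field named audit.addBy, in which case | where 'audit.addBy'="PP" works without spath (note the single quotes around a dotted field name in eval/where).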
I have a table with three columns. When I create a line chart using the visualization options, it uses column1 as the x-axis and column2 as the y-axis. When I hover over the dots, it shows the text for the y-axis value. I would like to display the column3 value when I hover over the dots. How can I do this?
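Classic line-chart tooltips only show the series name and the y-value, so one workaround is to fold column3 into the series name; a sketch, using the hypothetical field names column1/column2/column3 from the question:

```
... | chart values(column2) over column1 by column3
```

Hovering then shows the column3 value as the series label. If column3 has many distinct values this produces many series, so it only suits low-cardinality fields.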
Hi Team, I have a multivalue field in one of the user fields, along with other fields. However, when exporting the data to an external lookup, the multivalue field is converted into a single comma-separated value. For example, in my search the userid field appears as follows:

userid
890000
pklstu
790000
c_pklstu

However, after exporting to the external lookup, it transforms into:

userid
890000,pklstu,790000,c_pklstu

I need the multivalue field to remain unchanged in the external lookup so that I can accurately compare user IDs with other lookups. I have tried using mvexpand before exporting, but it introduced other challenges. Is there a way to ensure the multivalue field remains intact while exporting to the external lookup?
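CSV lookups flatten multivalue fields on write, so a common pattern is to join the values with a delimiter before outputlookup and split them again after inputlookup; a sketch, assuming a hypothetical lookup name my_users.csv:

```
... | eval userid=mvjoin(userid, "|") | outputlookup my_users.csv
```

and when reading it back:

```
| inputlookup my_users.csv | eval userid=split(userid, "|")
```

Pick a delimiter that cannot appear inside a user ID.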
I'm brand new to this and am hopeful this has a ready-made answer I've not been able to find (yet), but: we installed the universal forwarder from our Splunk Cloud instructions, set up the .spl file, and added a monitor for the log4j folder of a piece of software that server runs. On our non-Windows systems we set this up with indexer tokens that are used at setup. In my case, with this Windows system, the installation and setup go fine. I don't see any errors in splunkd.log on the host machine, but there's no data for that index. How do I add the specific index token to the universal forwarder?
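On a universal forwarder the target index is normally set per input rather than via a token prompt; a sketch of the monitor stanza, assuming a hypothetical path and index name, in $SPLUNK_HOME\etc\system\local\inputs.conf (or in the app the .spl installed):

```
[monitor://C:\Program Files\MyApp\logs\log4j]
index = my_app_index
sourcetype = log4j
disabled = 0
```

If no index is set, events go to the default index; if the named index does not exist in Splunk Cloud, the data is typically dropped, which matches "no errors but no data".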
Hi, I need help finding DistinctAdminUserCount and DistinctAdminUserNames for each associated Name inside the test or prod object:

{"prod":{},"test":{"DistinctAdminUser":["streaming","Create","","Application.","App.","App.","obi","Users","platform"],"TotalSinkAdminUsers":33,"TotalNSP3Count":11,"TotalSourceAdminUsers":10,"DistinctAdminUserCount":11,"TotalStreamAdminUsers":12,"TotalAdminUser":55,"nsp3s":[{"StreamAdminUserNames":["App."],"SourceAdminUserNames":["preprod"],"DistinctAdminUserCount":5,"SinkAdminUserCount":5,"SourceAdminUserCount":1,"DistinctAdminUserNames":["Technology","2","3","4","5"],"StreamAdminUserCount":1,"TotalAdminUserCount":7,"SinkAdminUserNames":["obi"],"Name":"hi-cost-test-sample"},{"StreamAdminUserNames":["preprod"],"SourceAdminUserNames":["admin.preprod"],"DistinctAdminUserCount":3,"SinkAdminUserCount":3,"SourceAdminUserCount":1,"DistinctAdminUserNames":["preprod","2","3","4","5"],"StreamAdminUserCount":1,"TotalAdminUserCount":5,"SinkAdminUserNames":["ops-tform"],"Name":"hi-cost-test-name"}],"subscriberId":"NSP3"}}

This is what I have so far:

index="*" source="*"
| spath test.nsps{} output=nsps
| mvexpand nsps
| spath input=nsps Name output=Name
| spath input=nsps ReadOnlyConsumerNames{} output=ReadOnlyConsumerNames
| search Name=""
| stats values(ReadOnlyConsumerNames) as ReadOnlyConsumerNames by Name
| rename Name as EntityName
| table EntityName ReadOnlyConsumerNames
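A sketch that walks the nsp3s array (note the key in the sample is nsp3s, not nsps) and pulls the two fields per Name; the index and source filters are placeholders from the question:

```
index="*" source="*"
| spath path=test.nsp3s{} output=nsp3
| mvexpand nsp3
| spath input=nsp3 path=Name output=Name
| spath input=nsp3 path=DistinctAdminUserCount output=DistinctAdminUserCount
| spath input=nsp3 path=DistinctAdminUserNames{} output=DistinctAdminUserNames
| table Name DistinctAdminUserCount DistinctAdminUserNames
```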
Hi, I'm using the below query to capture 4xx/5xx errors, but getting "no results found":

index=* source IN ("/aws/lambda/*") msg="**" (error.status=4* OR error.status=5*)
| eval status=case(like(error.status, "4%"), "4xx", like(error.status, "5%"), "5xx")
| stats count by error.status

This is roughly what my raw events look like:

{"name":"","","pid":8,"level":50,"error":{"message":"Request failed with status code 500","name":"AxiosError","stack":"AxiosError: Request failed with status code 500\n )","config":{"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false},"adapter":["xhr","http"],"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"X","xsrfHeaderName":"X-","maxContentLength":-1,"maxBodyLength":-1,"env":{},"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json","Authorization":"","User-Agent":"","Accept-Encoding":"gzip, compress, deflate, br"},"method":"get",""},"code":"ERR_BAD_RESPONSE","status":500},"eventAttributes":{"Identifier":2025732,"VersionNumber":"A.43"},"msg":"msg:data:error","time":":48:38.213Z","v":0}
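One likely culprit: inside eval functions, a field name containing a dot must be wrapped in single quotes, otherwise error.status is not read as a field. A sketch of the corrected query (search-time filters essentially unchanged):

```
index=* source IN ("/aws/lambda/*") ("error.status"=4* OR "error.status"=5*)
| eval status=case(like('error.status', "4%"), "4xx", like('error.status', "5%"), "5xx")
| stats count by status
```

Also note the original grouped by error.status rather than by the derived status field, so even with results it would not have bucketed into 4xx/5xx.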
If you download the Full Tor Node List Lookup App (https://splunkbase.splunk.com/app/7208), it comes with a CSV file that already has IPs in it, and after configuring the app, it neither overwrites that lookup nor writes to the index specified while configuring the input. @efloss
Splunk: 8.0.3 (I know it's old; we're working on approvals to upgrade).

We're seeing behavior I have never encountered before on a Windows-based server, and I want to see if anybody else has encountered it, because this may be happening on many of our systems where users claim the product isn't working.

We have a tstats command running on a data model for a dashboard. When loading less than 24 hours' worth of results, the panel works as expected. The second we switch to a date range (March 11 - March 11, for example), the other panels load fine, but this one takes much longer to load (up from 1.1 minutes to over 5 minutes). At some point during loading, the results begin shifting fields. For instance:

Normal:
Time | Host | User | Status | Description | System
<time> | <host> | <user> | <status> | <description> | <system>

Then new results begin showing up:
Time | Host | User | Status | Description | System
<tags> | <status> | <host> | <time> | |

This continues on and on until eventually the search fails and the following error is presented (one example):

"StatsFileReader file open failed file=D:\Splunk\var\run\splunk\dispatch\_aWEtbG96ZW5k_ aWEtbG96ZW5k _US1BdWrpdA__search8_1741807955.367128\statstmp_21805.sb.lz4"

I've done the following to troubleshoot:
- Turned off data model acceleration
- Verified they're running the default view and not a custom one
- Verified this happens on multiple dashboards using similar tstats searches
- Tried to replicate with a | from datamodel search; I do not see this happening there, so it seems to only happen with the tstats-based search
- Clicked "Open in Search" and saw the exact same behavior there as well
- The job inspector shows a lot of the following error: ERROR Bucket - Failed to discretize value 'report' of field '_time'. There are 4 log files' worth of these, with a bunch of different values: track_event_signatures, windows, etc. After these it says "skipping prestats because input looks already in prestats format"

Here is a copy of the tstats query, modified a little because it is from a paid app and I don't want to upset the publisher:

| tstats prestats=true summariesonly=false allow_old_summaries=false count as count FROM datamodel=Privileged WHERE (nodename=Privileged_Execution "Privileged_Execution.tag"=* "Privileged_Execution.user"="*" host="*") BY _time span=1s, host, "Privileged_Execution.process", "Privileged_Execution.user", "Privileged_Execution.description", "Privileged_Execution.status", "Privileged_Execution.tag"
| bucket _time span=1s
| stats dedup_splitvals=t count AS count by _time, host, Privileged_Execution.process, Privileged_Execution.user, Privileged_Execution.description, Privileged_Execution.status, Privileged_Execution.tag
| sort limit=`recent_events_tables_limit` -_time
| rename _time as Time, host as Host, "Privileged_Execution.process" as Process, "Privileged_Execution.user" as User, "Privileged_Execution.description" as Description, "Privileged_Execution.status" as Status, "Privileged_Execution.tag" as tag
| fillnull count
| fields + Time, Host, Process, User, Description, Status, tag, count
| join max=0 type=left tag [| inputlookup system_tag | rename system as System]
| fields - tag, count
| fillnull value="" System
| mvcombine System
| sort 0 - Time
| convert timeformat="%m/%d/%Y %H:%M:%S %z" ctime(Time)
I'm having trouble getting my duration into the format I'd prefer. I'd like to see the duration as MM:SS. However, despite a few different approaches, I keep getting milliseconds.
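A sketch, assuming a hypothetical field named duration holding milliseconds (skip the division if it is already in seconds):

```
| eval dur_sec = duration/1000
| eval duration_mmss = printf("%02d:%02d", floor(dur_sec/60), floor(dur_sec%60))
```

Alternatively, tostring(dur_sec, "duration") renders an HH:MM:SS string without the manual math.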
Are you recommending enableOldS2SProtocol=true? Are you implementing enableOldS2SProtocol=true? If yes, read below.

Splunk has dropped support for the oldest S2S version, but added the enableOldS2SProtocol config to allow a forwarder to use the oldest protocol.

With enableOldS2SProtocol=true, the forwarder is allowed to use the oldest protocol (protocol level 0), the first protocol ever. You are essentially using an almost 20-year-old protocol.

With enableOldS2SProtocol=false, the forwarder uses minimum protocol level 1 via the negotiateProtocolLevel config. If negotiateProtocolLevel is not set (it is not set by default), then the forwarder and receiver negotiate the latest common protocol supported by both. If you are on a Splunk 9.2.x receiver and the forwarder is 9.0.x or above, then protocol 6 is used.

When protocol negotiation happens between forwarder and receiver, if the receiver offers protocol 0, the forwarder does not accept that and still uses the minimum supported protocol 1, unless enableOldS2SProtocol=true is set on the forwarder.

Setting enableOldS2SProtocol=true on the forwarder means the receiver is only capable of protocol 0 and you are forcing the forwarder to use protocol 0. Setting enableOldS2SProtocol=true and negotiateProtocolLevel=0 on the forwarder means the forwarder is forced to use protocol 0 regardless of the receiver's protocol level.

Protocol levels:
0: Maximum network traffic over the S2S connection.
1: Network traffic optimization over the S2S connection.
2: Additional network traffic optimization over the S2S connection.
3: Metric support.
4: Ack support for rawless metric events.
5: Flag potential duplicate events.
6: Flag for cloned metric events so that cloned events are exempted from license usage.
7: SSL certificate requests.

Make an informed decision.
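For reference, a minimal sketch of where these settings would live, assuming outputs.conf on the forwarder under the [tcpout] stanza:

```
# outputs.conf on the forwarder -- only if the receiver truly supports nothing newer
[tcpout]
enableOldS2SProtocol = true
# optionally force level 0 regardless of what the receiver offers
negotiateProtocolLevel = 0
```

Leaving both unset lets the forwarder and receiver negotiate the newest common level, which is the recommended default.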
When will the Microsoft Fabric Add-on for Splunk be available for Splunk Cloud?