Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I have installed a Splunk Enterprise free trial in a VM as the root user. I know the best practice is to avoid running Splunk as root, so that if the underlying OS is compromised, the attacker doesn't get root-level access. I am following the online docs, which say that after installing as root you should not start Splunk yet, but instead add a new user and change ownership of the Splunk folder to that non-root user.

Before doing that, though, I checked the ownership of the installed files, and they are already owned by a "splunk" user. Does this mean Splunk automatically configured a non-root user during installation?

If so, how would I make sure it has read access to the local files I want to monitor?
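A minimal sketch of the usual checks, assuming a default /opt/splunk path and a user and group both named splunk. The RPM/DEB packages do create a splunk user at install time (a tarball install does not), which would explain the ownership you see:

```
# Check who owns the installation
ls -ld /opt/splunk

# If ownership is wrong, reassign it to the non-root user before first start
chown -R splunk:splunk /opt/splunk

# Give the splunk user read access to a root-owned log via a POSIX ACL,
# without changing the file's owner (requires ACL support on the filesystem)
setfacl -m u:splunk:r /var/log/messages
```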
Hi, how do we copy .tgz files from a Windows server to a Linux box? Can anyone help me with doing this?
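One common approach, sketched under the assumption that the Windows server has the built-in OpenSSH client (Windows 10 / Server 2019 and later) and SSH access to the Linux box; the host, user, and paths below are placeholders:

```
scp C:\temp\myfiles\splunk_app.tgz myuser@linux-host:/tmp/
```

Alternatives include pscp.exe from the PuTTY suite or a graphical client such as WinSCP.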
Hi, I'm trying to create two searches and having some problems; I hope somebody can help me with this.

1. Seven or more IDS alerts from a single IP address in one minute. I created something like the search below, but it doesn't seem to be working correctly:

```
index=ids
| streamstats count time_window=1m by src_ip
| where count >= 7
| stats values(dest_ip) as "Destination IP" values(attack) as "Attack" values(severity) as "Severity" values(host) as "FW" count by "Source IP"
```

2. Five or more hosts attacked with the same IDS signature in one hour. This seems even more complex, as it has three conditions: 5 hosts, 1 hour, and the same IPS signature. After failing at the first one, I'm not sure how to even start. Could somebody help me with this, please?
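A possible rework, sketched with bin + stats so each fixed window is counted. Two things worth noting about the original: streamstats with time_window needs time-ordered events, and the final by "Source IP" clause references a field the search never creates (the data has src_ip). Field names src_ip, dest_ip, and attack are taken from the post and may differ in your data.

```
index=ids
| bin _time span=1m
| stats count values(dest_ip) as "Destination IP" values(attack) as "Attack" by _time src_ip
| where count >= 7
```

For the second search, counting distinct attacked hosts per signature per hour:

```
index=ids
| bin _time span=1h
| stats dc(dest_ip) as attacked_hosts values(dest_ip) as hosts by _time attack
| where attacked_hosts >= 5
```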
I am trying to use a table column for a drilldown without displaying it. In Simple XML dashboards I could do this by specifying:
```
<fields>["field1","field2"...]</fields>
```
I would then still be able to use field3 when setting drilldown tokens. How can I do this in Dashboard Studio? I can't find a way to hide a column without also losing the ability to refer to it in tokens.
Let's say that I have a dashboard A containing a table:

| App Name | App Host | LinkToB | LinkToC | LinkToD |
|----------|----------|---------|---------|---------|
| abc      | host 1   | LinkToB | LinkToC | LinkToD |
| def      | host 2   | LinkToB | LinkToC | LinkToD |
| xyz      | host 1   | LinkToB | LinkToC | LinkToD |

I have 3 other dashboards (B, C, D), and I want clicking "LinkToX" to open dashboard X. However, in the Splunk Dashboard Studio UI I can only link the table to one dashboard. Is there any way to configure the JSON so the table can link to multiple dashboards? Or is there a way to make the table cells clickable URL links instead? Thank you!
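One pattern worth trying, sketched under the assumption that your Dashboard Studio version supports the drilldown.customUrl event handler and $row.<field>.value$ tokens: build each target dashboard's relative URL into a column with eval (dashboard paths and the app name below are placeholders), then point the drilldown at that column.

```
| eval LinkToB = "/app/search/dashboard_b?form.app=" . 'App Name'
```

And in the table's JSON:

```
"eventHandlers": [
  {
    "type": "drilldown.customUrl",
    "options": {
      "url": "$row.LinkToB.value$",
      "newTab": true
    }
  }
]
```

This still binds the whole row to one URL; whether you can get a different link per clicked cell depends on whether your release exposes the clicked column through a token, so check the drilldown token list for your version.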
Hi AppDynamics team, I'm trying to configure a Windows service application with an unhandled exception error for monitoring with the .NET agent, following this link: Configure the .NET Agent for Windows Services and Standalone Applications (appdynamics.com). Here is the part of the config.xml file I added:

```
<standalone-applications>
  <standalone-application executable="D:\sample project\MQ_ConsoleApp1\MQ_ConsoleApp1\bin\x64\Release\MQ_ConsoleApp1.exe">
    <tier name="DotNet Tier" />
  </standalone-application>
```

I have also tried configuring the entry points for the Windows service, but I am unable to get the transactions. Please let me know if I missed any configuration steps, and please help me resolve the issue. Thanks in advance.
Hello Splunkers, I'm currently implementing a connection from multiple GCP buckets to Splunk Enterprise. The add-on automatically indexes the data from those buckets at the _time it receives them (so if I have a list of transactions from March to November 2023 that are forwarded today, they are all indexed with today's time). However, I would like some of this data to be indexed using a time field present in the data, depending on the app that uses it (for example, app 1 has a time field named "Start_date" and app 2 has another one named "end_date"). Unfortunately, I can't think of a way to do it; maybe in the props.conf file, but I'm not sure. Any advice? Thanks.
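props.conf timestamp extraction is indeed the usual lever, applied per sourcetype on the instance that parses the data (the heavy forwarder running the add-on, in this kind of setup). A sketch, assuming JSON events; the stanza names are made up and the TIME_FORMAT must match your actual field values:

```
# props.conf on the parsing tier
[gcp:app1:events]
TIME_PREFIX = "Start_date"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 40

[gcp:app2:events]
TIME_PREFIX = "end_date"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 40
```

Note these settings only take effect for data ingested after the change, and only on the instance doing the parsing.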
Hi Team, while running the search below we get no license usage (showing 0) for 2-3 indexes, but for the other indexes I can see results:

```
index=_internal source="*license_usage.log" sourcetype=splunkd
| stats sum(b) as Bytes by idx
| eval GB=round(Bytes/1024/1024/1024,3)
| rename h as Host, s as Source, st as Sourcetype, idx as Index, GB as "License Used in GB"
| table Index, "License Used in GB"
```

I am trying to understand why this happens for only those 2-3 indexes. The index data is present on both indexers.
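One common cause worth checking: license_usage.log mixes several event types, and the per-index usage rows (type=Usage) are written only on the license manager, so the search has to run there or reach its _internal data. A sketch with the usual type filter added (note the rename of h/s/st in the original is a no-op, since those fields are already gone after the stats):

```
index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as Bytes by idx
| eval "License Used in GB" = round(Bytes/1024/1024/1024, 3)
| rename idx as Index
| table Index "License Used in GB"
```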
I was trying to configure the forwarder for a while and couldn't succeed, so I watched a video in which the presenter said to make sure the status is enabled. I thought the reason I wasn't receiving data might be that something was disabled, so I proceeded to enable everything in the Manage Apps section. I then got a message that I needed to restart; however, the website couldn't restart Splunk by itself and told me to do it through the command line, which I searched for but couldn't find. I then decided to restart the PC. Afterwards, when I opened the website I got the message "This site can't be reached: 127.0.0.1 refused to connect." I tried to stop and start splunkd from cmd with admin access, but that didn't quite fix it either.

[Screenshot: example of the Manage Apps section, not mine]
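For reference, the command-line restart the UI was asking for uses the Splunk CLI in the installation's bin directory; a sketch assuming the default Windows install path:

```
cd /d "C:\Program Files\Splunk\bin"
splunk status
splunk restart
```

If splunkd refuses to start, the startup errors usually land in C:\Program Files\Splunk\var\log\splunk\splunkd.log, which is worth reading before trying again.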
How do I extract fields from the event below? I want nname, ID, app, and Time; here nname is mule_330299_prod_App01_Clt1, ID=91826354-d521-4a01-999f-35953d99b829, app=870a76ea-8033-443c-a312-834363u3d, and Time=2023-12-23T14:22:43.025Z.

```
CSV Content:nname,Id,app,Time
mule_330299_prod_App01_Clt1,91826354-d521-4a01-999f-35953d99b829,870a76ea-8033-443c-a312-834363u3d,2023-12-23T14:22:43.025Z
mule_29999_dev_WebApp01_clt1,152g382226vi-44e6-9721-aa7c1ea1ec1b,26228e-28sgsbx-943b-58b20a5c74c6,2024-01-06T13:29:15.762867Z
```

We have multiple lines like this in one event.
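A search-time sketch: pull every data row out of the event, expand them, then split on commas. The leading mule_ anchor is an assumption based on the sample rows; adjust it if other prefixes occur:

```
| rex max_match=0 field=_raw "(?m)^(?<row>mule_[^,\r\n]+,[^,\r\n]+,[^,\r\n]+,[^\r\n]+)$"
| mvexpand row
| rex field=row "^(?<nname>[^,]+),(?<ID>[^,]+),(?<app>[^,]+),(?<Time>.+)$"
| table nname ID app Time
```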
Hello. I am trying to route some events to a different index based on a field in the events. The events are JSON formatted; this is an example:

```
{
  "topic": "audits",
  "events": [
    {
      "admin_name": "john doe john.doe@juniper.net",
      "device_id": "00000000-0000-0000-1000-5c5b35xxxxxx",
      "id": "8e00dd48-b918-4d9b-xxxx-xxxxxxxxxxxx",
      "message": "Update Device \"Reception\"",
      "org_id": "2818e386-8dec-2562-xxxx-xxxxxxxxxxx",
      "site_id": "4ac1dcf4-9d8b-7211-xxxx-xxxxxxxxxxxx",
      "src_ip": "xx.xx.xx.xx",
      "timestamp": 1549047906.201053
    }
  ]
}
```

We receive the events on a heavy forwarder and forward them to an indexer. We want to send the events with the topic "audits" to a different index than the default one (imp_low). I have tried these settings on the heavy forwarder:

props.conf:
```
[_json-Mist_Juniper]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
pulldown_type = 1
TRANSFORMS-force_index = setindexHIGH
```

transforms.conf:
```
[setindexHIGH]
SOURCE_KEY = topic
REGEX = (audits)
DEST_KEY = _MetaData:Index
FORMAT = imp_high
```

But it is not working; all the events go to the "imp_low" index. Thanks.
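One likely culprit: SOURCE_KEY = topic points at a search-time field that doesn't exist while the event is being parsed, so the transform never matches. A sketch of the variant usually suggested, matching the raw JSON text instead:

```
# transforms.conf
[setindexHIGH]
SOURCE_KEY = _raw
REGEX = "topic"\s*:\s*"audits"
DEST_KEY = _MetaData:Index
FORMAT = imp_high
```

If it still routes everything to the default, it may be worth testing with INDEXED_EXTRACTIONS removed (using KV_MODE = json at search time instead), since structured parsing can interact badly with index-time transforms depending on where the extraction happens.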
Hi, I have a dashboard that displays a CSV. I want to add lists for it to display that are not in the CSV, but the list I'm adding includes the records that are already in the CSV, and I want a list that excludes them.

This code gets me the whole list:

```
index="------" interface="--"
| stats values(interface) as importers
```

This code brings me the list from the CSV:

```
index="------------" code=*
| search [| inputlookup importers.csv | lookup importers.csv interfaceName OUTPUTNEW system environment timerange | stats values(interfaceName) as importers_csv]
```

I want a search that brings me the list without the records in the CSV. Thanks.
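A sketch of the usual exclusion pattern, keeping the post's placeholder index names: rename the lookup column so the subsearch emits interface=... conditions, then negate the whole subsearch:

```
index="------" interface="--"
| search NOT [| inputlookup importers.csv | rename interfaceName as interface | fields interface]
| stats values(interface) as importers
```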
I need to drop EventCode 4634 and 4624 events with Logon Type 3. How can I use the nullQueue option, and what is the correct REGEX to write in transforms.conf?
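A sketch of the standard nullQueue pair, assuming classic multiline Windows Security events (the sourcetype name and the exact "Logon Type" layout vary by input):

```
# props.conf
[WinEventLog:Security]
TRANSFORMS-drop_logons = drop_logon_type3

# transforms.conf
[drop_logon_type3]
REGEX = (?ms)EventCode=(4624|4634).*Logon\s+Type:\s+3\b
DEST_KEY = queue
FORMAT = nullQueue
```

These transforms must live on the first full Splunk instance that parses the data (heavy forwarder or indexer), and XML-rendered events need a different pattern, targeting something like <Data Name='LogonType'>3 instead.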
Hello, I would like to create a compliance user with read-only access to all knowledge objects and dashboards in our Splunk environment. I have granted read permission on all apps to that specific role; however, my admin role can view almost double the number of alerts, reports, and dashboards that the compliance role can. What could be the cause here, and what could I be missing? Do I need to edit every single knowledge object and dashboard to grant permission to that role? Is there an easier method of doing this? Thanks, regards.
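The gap is usually objects that are private (owner-only) or shared at app level without read for the new role; an admin sees those regardless. A sketch for inventorying dashboard permissions so the offenders can be found (run as admin; the same idea works against the saved/searches endpoint for alerts and reports):

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| table title eai:acl.app eai:acl.owner eai:acl.sharing eai:acl.perms.read
```

Permission changes are still per-object, but they can be scripted against the same REST endpoints rather than clicked through one by one in the UI.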
Hi, how can we find the difference between these two dates, in years, days, hours, and minutes?

From: 11/28/2023 03:38 PM
Till: 11/28/2024 04:08 PM
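A sketch using strptime to parse both timestamps and simple arithmetic on the difference in seconds:

```
| makeresults
| eval from = strptime("11/28/2023 03:38 PM", "%m/%d/%Y %I:%M %p"),
       till = strptime("11/28/2024 04:08 PM", "%m/%d/%Y %I:%M %p"),
       diff = round(till - from, 0)
| eval days = floor(diff/86400),
       hours = floor((diff%86400)/3600),
       minutes = floor((diff%3600)/60),
       as_duration = tostring(diff, "duration")
| table days hours minutes as_duration
```

Splitting out whole years needs a convention for leap days, so total days is usually the safer unit to report.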
Hello, I'm running a Cisco SD-WAN fabric and I was curious whether I can send data directly to Splunk Cloud. According to the Cisco Catalyst SD-WAN Splunk Integration User Guide, I should select a TCP/UDP 514 syslog data input, but I don't have this option under data inputs in Splunk Cloud. Is there a way to send the logs to Splunk Cloud, or do I need a locally installed Splunk instance? BR, bazil
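Splunk Cloud generally does not accept raw syslog directly; the common pattern is a small on-prem component (a heavy forwarder, or a syslog server / SC4S) that listens on 514 and forwards to the cloud stack. A sketch of the listener side on a local heavy forwarder; the index and sourcetype are placeholders:

```
# inputs.conf on the on-prem heavy forwarder
[udp://514]
sourcetype = cisco:sdwan
index = network
connection_host = ip
```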
I need help making a pie chart with two slices, one for success_transaction and the other for error_transaction. When I try, it shows consolidated data by service name. I would also like to show the count inside the slices and give the Y axis a title with those field names. I was using this query; please help me solve this problem:

```
index="aio" Environment="POD" Appid="APP-53" ("Invokema : call() :") OR ("exception" OR level="ERROR" NOT "NOT RACT" NOT H0 NOT "N is null" NOT "[null" NOT "lid N")
| rex field=_raw "00\s(?<service_name>\w+)-pod"
| rex field=_raw "]\s(?<valid_by>.*?)\s\:\scall()"
| eval success_flag = if(valid_by="Invokema", 1,0)
| fillnull validate_by value=null
| fillnull service_name value=nservice
| eval error_flag = if(valid_by="null", 1,0)
| stats sum(success_flag) as Success_Transaction, sum(error_flag) as Error_Transaction by service_name
```

Your help will be appreciated.
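Two small things stand out first: fillnull expects value= before the field list, and the search fills validate_by while the rex creates valid_by. Beyond that, a pie chart plots a single series (one category field against one count), so for a two-slice pie the by service_name has to go, and the two columns need to be flipped into rows. A sketch of that last step, appended to the post's search:

```
...
| stats sum(success_flag) as Success_Transaction, sum(error_flag) as Error_Transaction
| transpose
| rename column as transaction_type, "row 1" as count
```

For a per-service comparison, keep by service_name and switch to a column chart instead, since that supports two series; slice/axis labels and showing counts are then handled in the chart's format options.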
Hello! As the subject of the question says, I'm trying to create SPL queries for several visualizations, but it has become very tedious since spath does not work on the output events: the nested payload comes through in a string format, making more complex operations very hard to work with. The event contents are in a valid JSON format (checked using jsonformatter). Here's the event output:

```
{
  "time": "time_here",
  "kubernetes": {
    "host": "host_name_here",
    "pod_name": "pod_name_here",
    "namespace_name": "namespace_name_here",
    "labels": { "app": "app_label" }
  },
  "log": {
    "jobId": "job_id_here",
    "dc": "dc_here",
    "stdout": "{ \"Componente\" :  \"componente_here\", \"channel\" :  \"channel_here\", \"timestamp\" :  \"timestamp_here\", \"Code\" :  \"code_here\", \"logId\" :  \"logid_here\", \"service\" :  \"service_here\", \"responseMessage\" :  \"responseMessage_here\", \"flow\" :  \"flow_here\", \"log\" :  \"log_here\"}",
    "level": "info",
    "host": "host_worker_here",
    "flow": "flow_here",
    "projectName": "project_name_here",
    "caller": "caller_here"
  },
  "cluster_id": "cluster_id_here"
}
```
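The usual two-step for this shape: extract the embedded string with spath, then run spath again over that string. Field names below come from the sample event:

```
| spath path=log.stdout output=stdout_json
| spath input=stdout_json
| table Componente channel timestamp Code logId service responseMessage flow log
```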
Hello Community! I have recently been trying to get AppDynamics working to monitor an application built with NodeJS. It is a frontend application hosted on an EC2 instance. The web server is Apache, which proxy-forwards the requests to the Node app running on a custom port within the EC2. I tried adding the generated script into the main.js startup file (which I use to start the Node app via the pm2 service), then restarted the web app and created load on it, but with no success: the connection check window keeps looping and nothing happens. I have successfully connected the DB and Machine agents, but I am unable to get this NodeJS app monitored. My project's hosting directory doesn't include any server.js file where I could add the code snippet; as far as I can tell, it only has a main.js file and an index.html file. I have been unable to get the code snippet to work so far. Any insights on this would be highly appreciated. Thanks!
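For reference, the Node.js agent generally has to be the very first require in the process entry point (main.js here), before any other module loads. A sketch with placeholder controller values; check the generated snippet from your controller for the exact settings:

```
// Must be the first statement in main.js, before any other require()
require("appdynamics").profile({
  controllerHostName: "your-controller.saas.appdynamics.com", // placeholder
  controllerPort: 443,
  controllerSslEnabled: true,
  accountName: "your-account",     // placeholder
  accountAccessKey: "your-key",    // placeholder
  applicationName: "MyNodeApp",
  tierName: "Frontend",
  nodeName: "node-1"
});

// ...the rest of main.js follows
```

After adding it, restart the process fully under pm2 so the agent loads first; if pm2 runs in cluster mode, check the agent documentation for cluster-mode support.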
Hello! Is it possible to implement something like this? I have 300+ devices that send logs to one index. I want to trigger an alert when there are no logs from a device for more than one minute, and also a notification when the device resumes logging; immediately after that notification, the CSV file should be updated. My search now looks like this:

```
| tstats latest(_time) as lastSeen where index IN("my_devs") earliest=-2m latest=now by host
| lookup devs_hosts_names.csv host OUTPUT dev_name
| eval dev_name = if(isnotnull(dev_name), dev_name, "unknown host")
| eval status = if((now() - lastSeen <= 60), "up", "down")
| eval status = if(isnotnull(lastSeen), status, "unknown")
| search NOT [| inputlookup devs_status.csv | fields host dev_name status]
| convert ctime(*Seen)
| table host dev_name status lastSeen
```

At this point in the search I would like to trigger an alert for each dev_name and then rewrite (update) devs_status.csv, but I can't find how this can be done and would appreciate your help. I'm new to Splunk and don't know how typical this kind of request is. Thanks.
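One common state-tracking pattern, sketched under these assumptions: the alert runs every minute, devs_status.csv holds the previous run's status per host, and the alert action is set to trigger once per result. outputlookup passes its input through, so the change filter can come after it:

```
| tstats latest(_time) as lastSeen where index IN ("my_devs") by host
| lookup devs_hosts_names.csv host OUTPUT dev_name
| eval dev_name = coalesce(dev_name, "unknown host")
| eval status = if(now() - lastSeen <= 60, "up", "down")
| lookup devs_status.csv host OUTPUT status as prev_status
| eval changed = if(status != coalesce(prev_status, "none"), 1, 0)
| fields host dev_name status lastSeen changed
| outputlookup devs_status.csv
| where changed = 1
```

Caveat: tstats only returns hosts that logged something within the search window, so a device silent for the whole window never appears at all; seeding the search from devs_hosts_names.csv (for example with inputlookup plus append) is one way to catch those fully silent devices.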