All Topics


I have had a few issues ingesting data into the correct index. We are deploying an app from the deployment server, and this particular app has two clients. Initially, when I set this app up, I was ingesting data into our o365 index. We have a team running a script that tracks all deleted files, and we were getting one line per event. At the time, my inputs.conf looked like:

[monitor://F:\scripts\DataDeletion\SplunkReports]
index=o365
disabled=false
source=DataDeletion

It would ingest all CSV files within that DataDeletion directory; in this case, it ingested everything under that directory. This worked.

I then changed the index to testing so I could manage the new data a bit better while we were still testing it. One inputs.conf backup shows that I had this at some point:

[monitor://F:\scripts\DataDeletion\SplunkReports\*.csv]
index=testing
disabled=false
sourcetype=DataDeletion
crcSalt = <string>

Now, months later, I have changed the inputs.conf to ingest everything into the o365 index, applied that change, and pushed it out to the server class using the deployment server, and yet the most recent data looks different. The last events we ingested went into the testing index. This may be due to how the script is sending data into Splunk, but it looks like it is aggregating hundreds of separate lines into one event. My inputs.conf currently looks like this:

[monitor://F:\scripts\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

[monitor://F:\SCRIPTS\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

[monitor://D:\DataDeletion\SplunkReports\*]
index = o365
disabled = 0
sourcetype = DataDeletion
crcSalt = <SOURCE>
recursive = true
#whitelist = \.csv

I am just trying to grab everything under D:\DataDeletion\SplunkReports\ on the new Windows servers, ingest all of the CSV files under there, and break each line of the CSV into a new event. What is the proper syntax for this input, and what am I doing wrong? I have tried a few things and none of them seem to work: I've tried adding a whitelist and adding a blacklist, and I have recursive and crcSalt there just to grab anything and everything. And if the script isn't at fault for sending chunks of data in one event, would adding a props.conf fix how Splunk is ingesting this data? Thanks for any help.
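For the event-breaking half of the question, a minimal props.conf sketch, assuming the sourcetype really is DataDeletion and the files are plain CSVs with a header row; since these files are monitored by forwarders, structured-data settings like INDEXED_EXTRACTIONS have to be deployed to the forwarders in the same app, not just to the indexers:

# props.conf (hedged sketch, deploy alongside the inputs.conf above)
[DataDeletion]
INDEXED_EXTRACTIONS = csv      # parse the header row; one event per CSV line
SHOULD_LINEMERGE = false       # never glue multiple lines into one event
LINE_BREAKER = ([\r\n]+)       # break events at line boundaries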
Hey, I have a problem after upgrading from 9.0.4 to 9.1.5 (Enterprise). All the dashboards that use tokenlinks.js from the "simple_xml_examples" (Splunk Dashboard Examples) app, latest version, show the following error, and the script doesn't work:

"A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details."

In the dev tools (F12) I saw the error comes from common.js:

"Refused to execute script from '/en-US/static/@29befd543def.77/js/util/console.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
common.js:1702 Error: Script error for: util/console
http://requirejs.org/docs/errors.html#scripterror
    at makeError (eval at e.exports (common.js:502:244924), <anonymous>:166:17)
    at HTMLScriptElement.onScriptError (eval at e.exports (common.js:502:244924), <anonymous>:1689:36)"

Does anyone have any idea why this happens or how to fix it? Thanks!
Hello team, I need a query to extract the most commonly used fields by users in a particular dashboard. Please help me. Thanks! Renuka O
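If the goal is to see which fields a dashboard's searches reference, one hedged starting point is the REST endpoint that exposes dashboard definitions; eai:data holds the dashboard's XML source, from which field names can then be pulled out with rex (the dashboard title below is a placeholder):

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search title="<your_dashboard_name>"
| table title eai:data

Measuring which fields users actually interact with is a different problem and would need the _audit index or usage instrumentation instead.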
Hello, is it possible to send alerts using our SMS provider? If not, how can I send SMS for alerts? Thanks.
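One common pattern, sketched under the assumption that the SMS provider exposes an HTTP API (the URL below is a placeholder): Splunk's built-in webhook alert action can POST the alert payload to such an endpoint.

# savedsearches.conf (hedged sketch; scheduling and alert conditions omitted)
[Example SMS alert]
search = index=main log_level=ERROR | stats count
action.webhook = 1
action.webhook.param.url = https://sms-gateway.example.com/api/send

Alternatively, many carriers offer email-to-SMS gateways, in which case the standard email alert action pointed at the gateway address is enough.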
Hi all, I have a monitor stanza in inputs.conf that monitors our organization's proxy; the logs are sent by syslog-ng. I have only one stanza, which monitors 4 different source IPs from that proxy. I want to configure a different "source" for each source IP, without seeing the name of the log file in the value of the source field. Let's say the monitor path (in the deployment server) is:

$SPLUNK_HOME/syslog/proxy/*/*.log

In the source field I will see:

$SPLUNK_HOME/syslog/proxy/<proxy_source_a|b|c|d>/<proxy_date_and_time>.log

I want the source to stop at proxy_source_a|b|c|d, for example:

$SPLUNK_HOME/syslog/proxy/<proxy_source_a|b|c|d>/

Is that possible?
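A hedged sketch of one way to do this with an index-time transform (applied on the indexers or a heavy forwarder, not on a universal forwarder): rewrite MetaData:Source so it stops at the per-proxy directory. The stanza pattern and regex are assumptions based on the layout described above.

# props.conf (hedged sketch)
[source::.../syslog/proxy/*/*.log]
TRANSFORMS-trim_source = trim_proxy_source

# transforms.conf (hedged sketch)
[trim_proxy_source]
SOURCE_KEY = MetaData:Source
REGEX = ^(source::.*/syslog/proxy/[^/]+/)
FORMAT = $1
DEST_KEY = MetaData:Source

The capture keeps everything up to and including the proxy_source_a|b|c|d directory and discards the file name that follows it.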
Hi everyone, I'm using Splunk SOAR and trying to send HTML emails with detailed information via the SMTP app. I would like to include images in the email and create a well-formatted HTML message body. Could someone guide me on how to upload and embed images within the email? Thanks in advance!
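In standard HTML email, inline images are referenced by Content-ID (cid:) and shipped as related MIME parts; whether that works here depends on how the SOAR SMTP app handles attachments, so treat this body as a hedged sketch (the image name is a placeholder that would have to match the attached part's Content-ID):

<!-- HTML body sketch for an SMTP send-email action -->
<html>
  <body>
    <h2>Incident Summary</h2>
    <p>Details for the analyst go here.</p>
    <img src="cid:summary_chart.png" alt="Summary chart"/>
  </body>
</html>

If inline attachment turns out not to be supported, hosting the image on an internal web server and using an absolute URL in src is a simpler fallback.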
I have used the query below to get a list of 25 sourcetypes that have not reported for the last 30 days, but I need to know the volume of data they ingested. Kindly suggest any ideas or alternative methods:

| metadata type=sourcetypes
| eval diff=now()-lastTime
| where diff > 3600*24*30
| convert ctime(lastTime)
| convert ctime(firstTime)
| convert ctime(recentTime)
| sort -diff
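One common approach, sketched on the assumption that _internal retention still covers the window of interest: license_usage.log records indexed bytes per sourcetype (field st), so it can report volume even for sourcetypes that have since gone quiet.

index=_internal source=*license_usage.log type=Usage earliest=-90d
| stats sum(b) AS bytes BY st
| eval GB=round(bytes/1024/1024/1024, 2)
| rename st AS sourcetype
| sort -GB

The results could then be joined or looked up against the sourcetype list from the metadata search above.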
Hi, I'm looking for advice on how often I should upgrade the Splunk Universal Forwarder, and what the best practice is for this. The page at https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Admin/UpgradeyourForwarders says: "As a best practice, run the most recent forwarder version, even if the forwarder is a higher version number than your Splunk Cloud Platform environment." But is it really good practice to install the latest version? How do you do this in your environment?
One of my dashboard panels is not showing any results, but when I run the search manually it returns results. Out of 8 panels, only one panel has the issue. I am not using base searches.
Yesterday I onboarded new servers into my Splunk. In /opt/splunkforwarder/etc/system/local/inputs.conf I used settings like this:

[monitor:///var/log/]
disabled = false
index = <NewIndex>

[monitor:///home/*/.bash_history]
disabled = false
index = <NewIndex>
sourcetype = bash_history

I onboarded 6 Ubuntu servers. In the first 4 hours I got a lot of data, about 1 GB (I was shocked, since it was only 4 hours), but after 2 days it had only reached 4.88 GB in total. My understanding is that in the first 4 hours it read all the old data from .bash_history and /var/log, because when I check on the indexer it says Earliest Event = 15 years ago. My question: is this normal, or do I need to change my inputs.conf? ~Danke
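That initial spike is expected: a monitor input reads each file from the beginning the first time it sees it. If historical content isn't wanted on future onboardings, inputs.conf has an ignoreOlderThan setting that skips files whose modification time falls outside the given window; a hedged sketch (the 7d cutoff is an arbitrary example):

[monitor:///var/log/]
disabled = false
index = <NewIndex>
ignoreOlderThan = 7d    # skip files not modified in the last 7 days

Note that this is evaluated per file by modification time, so an old but still-active file is still read in full, including its historical lines.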
Hi there, I onboarded a new server into a new index (Ubuntu with UF). Let's say my index is index=ABC. I want to connect it to a data model; unfortunately I'm not the one who originally set it up, and when I check it I get the error "This object has no explicit index constraint. Consider adding one for better performance." When I check the macro `cim_Endpoint_indexes`, it only shows (). When I try to add my new index to that macro, I get a 500 server error. According to this question: https://community.splunk.com/t5/Knowledge-Management/Adding-index-to-accelerated-CIM-datamodel/m-p/586847#M8722 there are 2 options:

- if you don't rebuild the data model, Splunk will start to add logs from that index when you save the macro; old events aren't added to the data model, only the new ones;
- if you rebuild the data model, Splunk will add to the data model all the events in all indexes contained in the macro, up to the retention period (e.g. Network Traffic 1 month, Authentication 1 year, and so on).

Since I cannot add the index through the macro, I created new event types and tags for my new index:

Eventtype                        Tags
eventtype=ABC_endpoint_event     tag=endpoint, tag=asset, tag=network
eventtype=ABC_process_event      tag=process, tag=endpoint
eventtype=ABC_network_event      tag=network, tag=communication
eventtype=ABC_security_event     tag=security, tag=endpoint

One of the base searches in the Endpoint data model uses tag=process:

(`cim_Endpoint_indexes`) tag=process tag=report
| eval process_integrity_level=lower(process_integrity_level)

That query filters on tag=process, but when I run it, it doesn't show my new index. Can anyone help me solve this issue? ~Danke
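If the 500 error blocks editing the macro in the UI, the same change can usually be made directly in configuration on the search head; a hedged sketch, assuming the stock CIM app location (the index name is from the post, the path is an assumption):

# $SPLUNK_HOME/etc/apps/Splunk_SA_CIM/local/macros.conf
[cim_Endpoint_indexes]
definition = (index=ABC)

After this change (and a restart or debug/refresh), the data model's constraint searches should include index=ABC; the event types and tags are still needed so the events match Endpoint data set constraints such as tag=process.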
With the following search I get a "Missing Closing Parenthesis" error in Splunk. I tried the same rex in regex101 and it worked there.

index=digitalguardian "appStatus"
| rex ,\\"appStatus\\":\\"(?<status>\w+\s\w+)\\"

Sample event:

2024-02-21 {\"callCenterrecontactevent\":{\"customer\":{\"id\":\"6ghty678h\", \"idtypecd\":\"connect_id\"}, \"languagecd\":\"eng\",\"vhannelInstance\":: {\"status\":{\"serverStatusCode\":\"400\",\"severity\":\"Error\",\"additionalStatus\":[{\"statusCode\":400, \"appStatus\":\"Schema Validation\",\"serverity\":\"Error\"
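The likely cause is that rex expects its regular expression as a double-quoted string in SPL; left unquoted, the parser chokes on the parentheses. A hedged sketch that also sidesteps the escaped-JSON backslashes by matching the \":\" separator with \W+:

index=digitalguardian "appStatus"
| rex "appStatus\W+(?<status>\w+\s\w+)"

Against the sample event above, this should capture status="Schema Validation".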
My first query creates a list of application names that are then displayed in multiple single value fields. These value fields are in the first column of a larger table.

| where count=1
| fields app

In the rest of the columns I need to put a single value field with the compliance rate of that application across multiple metrics. What I'm looking to do is set a variable per row on data load that would allow me to pull the right compliance number for the application name. My original idea was to hard-code each compliance visualization to search for a specific application name; however, if the list of applications changes, the metric will no longer match the name. So how does one set a variable on search load to be used by other visualizations?
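A hedged Simple XML sketch of one way to do this: a dashboard search can set tokens when it finishes via a <done> handler, reading $result.<field>$ from the first result row, and other panels can then reference that token. The names below are placeholders.

<search>
  <query>index=main sourcetype=app_inventory | where count=1 | fields app</query>
  <done>
    <!-- $result.app$ exposes the app field of the first result row -->
    <set token="app_tok">$result.app$</set>
  </done>
</search>

<!-- elsewhere, a single value panel keyed off the token -->
<single>
  <search>
    <query>index=main sourcetype=compliance app="$app_tok$" | stats avg(rate)</query>
  </search>
</single>

Because $result.app$ only sees the first row, a genuinely per-row layout usually ends up as a trellis visualization or a table rather than hand-placed single values.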
The search you requested could not be found. The search has probably expired or been deleted. Clicking "Rerun search" will run a new search based on the expired search's search string in the expired search's original time period. Alternatively, you can return back to Splunk.
This is my query that isn't working as expected.

index=julie sourcetype!="julie:uat:user_activity" host!="julie-uat.home.location.net:8152" application_id=julie1 policy_id=framework action=session_end "error_code"=9999 "*"
| table julie_date_time, event_name, proxy_id, error_code, session_id, device_session_id, result
| rename session_id as JulieSessionId
| join type=left device_session_id
    [search index=julie sourcetype!="julie:uat:user_activity" host!="julie-uat.home.location.net:8152" application_id=julie1 policy_id="FETCH-DAVESESSION-ID" action=create_ticket
    | table timeDave, device_session_id, session_id
    | rename session_id as DAVESessionID]

Assume the primary query returns data like the following:

julie_date_time       event_name      proxy_id  error_code  Juliesession_id  device_session_id  result
2024-09-20T23:53:53   Login           199877    9999        1a890963         f5318902           pass
2024-09-19T08:20:00   View Profile    734023    9999        92xy9125         81b3e713           pass
2024-09-17T11:23:45   Change Profile  089234    9999        852rs814         142z7x81           pass

Requirement: I want to add the DAVEsession_id to the above table when the subsearch returns something like:

timeDave              event_name      DAVEsession_id  device_session_id
2024-09-20T23:53:50   Login           1a890963        f5318902
2024-09-19T08:19:58   View Profile    92xy9125        81b3e713
2024-09-17T11:23:40   Change Profile  852rs814        142z7x81

Expected outcome:

julie_date_time       event_name      proxy_id  error_code  Juliesession_id  device_session_id  result  timeDave              DAVEsession_id
2024-09-20T23:53:50   Login           199877    9999        1a890963         f5318902           pass    2024-09-20T23:53:50   1a890963
2024-09-19T08:19:58   View Profile    734023    9999        92xy9125         81b3e713           pass    2024-09-19T08:19:58   92xy9125
2024-09-20T23:53:53   Change Profile  089234    9999        852rs814         142z7x81           pass    2024-09-17T11:23:40   852rs814
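A hedged, join-free sketch of the same merge: pulling both event sets in a single search and grouping by device_session_id avoids join's subsearch limits. Field names are taken verbatim from the queries above, so they should be verified against the actual data.

index=julie sourcetype!="julie:uat:user_activity" host!="julie-uat.home.location.net:8152" application_id=julie1
    ((policy_id=framework action=session_end error_code=9999) OR (policy_id="FETCH-DAVESESSION-ID" action=create_ticket))
| stats values(julie_date_time) AS julie_date_time values(event_name) AS event_name
        values(proxy_id) AS proxy_id values(error_code) AS error_code values(result) AS result
        values(timeDave) AS timeDave values(session_id) AS session_id BY device_session_id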
Hi, I have set up 2 VMs in VirtualBox, installed Splunk Enterprise on Windows Server 2022, and installed the universal forwarder on a Windows 10 VM. I have enabled listening port 9997 in Splunk Enterprise. While installing the UF, I skipped the deployment server config (left it empty) and entered the IP of the Windows Server machine in the receiving indexer window. Then I checked the connection from the UF machine to Splunk Enterprise with this PowerShell command:

Test-NetConnection -ComputerName xxx.xxx.x.xxx -Port 9997    (successful)

and from Splunk to the universal forwarder:

Test-NetConnection -ComputerName xxx.xxx.x.xxx    (successful)

So the connection is up and running between the 2 devices. But in Splunk Enterprise, when I go to Settings > Forwarder Management, I cannot see the Windows client. Same issue in Settings > Add Data > Forward: "There are currently no forwarders configured as deployment clients to this instance." What am I doing wrong? Did I skip any configuration? Can someone help, please?
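One detail worth checking: Forwarder Management only lists forwarders that phone home as deployment clients, which is exactly the step skipped during the install; forwarding events on port 9997 alone never registers a UF there. A hedged sketch of the missing piece on the forwarder (the IP is a placeholder, and 8089 is the deployment server's default management port):

# deploymentclient.conf on the universal forwarder (hedged sketch)
[deployment-client]

[target-broker:deploymentServer]
targetUri = xxx.xxx.x.xxx:8089

After restarting the forwarder, it should appear under Settings > Forwarder Management on the deployment server.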
So we have a rather complicated Splunk environment with an index cluster and about half a dozen search heads, and all that is fine and good. However, I want to collect the Application logs from the Windows Event Viewer on our two Splunk deployment servers, and I want that data to go into the central EventLog index. I do not see that index as a choice in the pulldown on our two deployment servers like I do on our search head servers, and I forgot how to set that up. We use Microsoft Windows 2019 to run all of our Splunk instances, and I like to use the Web UI for as much as possible, though I ain't afraid to touch the config text files, you know what I'm sayin'. In the Web UI on the deployment servers I find this under Settings > Data Inputs > Local Event Log Collection, and there I can select the Application, Security, and/or System event logs just fine. However, down below in the Index (Set the destination index for this source) section, I only see the 15 local indexes for that server and not those on our index cluster. So is it wise to point the deployment servers at our index cluster, and if so, how do I accomplish this? Or is there a better way to gather the Application log off the deployment servers?
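The usual pattern is to have the deployment servers forward to the index cluster like any other Splunk instance. The index pulldown only shows indexes defined locally, but the input can still name a remote index (typed into inputs.conf, or into the UI field) as long as that index exists on the indexers. A hedged outputs.conf sketch with placeholder indexer addresses:

# outputs.conf on each deployment server (hedged sketch)
[tcpout]
defaultGroup = cluster_indexers
indexAndForward = false    # send events to the cluster, keep nothing locally

[tcpout:cluster_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

With forwarding in place, the Local Event Log Collection input can target the central EventLog index even though it isn't in the local pulldown.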
Hi Team, we're getting skipped-search alerts for all 3 Lookup Gen searches from the broken_hosts app. How can we resolve this? Even after disabling those searches, we are still getting errors for them. Your assistance is greatly appreciated.

Lookup Gen - bh_sourcetype_cache   broken_hosts   Relation '' is unknown. (2)   none   2
Lookup Gen - bh_host_cache         broken_hosts   Relation '' is unknown. (2)   none   2
Lookup Gen - bh_index_cache        broken_hosts   Relation '' is unknown. (2)   none
I've successfully uploaded and installed a private app to Cloud. The app simply contains a few JavaScript-based utilities which are located in, e.g.:

common_ui_util/appserver/static/js/close_div.js

I'm hoping to use these in the same way that I'm able to from an Enterprise install, where, from any other app context, I can include the JavaScript in the Simple XML, e.g.:

<form version="1.1" script="common_ui_util:/js/close_div.js">

However, this isn't working for me in Cloud, and the console shows the script as a 404, with the path:

https://<subdomain>.splunkcloud.com/en-US/static/@<id>/app/common_ui_util//js/close_div.js

I've verified that the app is installed, that permissions are set to read for everyone, and that it is exported globally:

Common UI Utilities   common_ui_util   1.0.0   Global   Enabled

What am I missing here?
Hi team, is there any information on when Compliance Essentials will be updated to support CMMC version 2.0? From my understanding, it still only supports 1.0, and this is preventing customers from considering the Splunk platform for their environments, since there are specific needs around using Splunk to address CMMC compliance.