Hi all, I am experiencing an issue with the Splunk WinHostInfo input. It is not working after being deployed to the universal forwarder, whereas the other logs from the same device are successfully received. Does anyone have any ideas or suggestions on how to resolve this?
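For reference, assuming "WinHostInfo" refers to the WinHostMon modular input deployed by the Windows TA (an assumption, since the question doesn't name the app), a typical stanza looks like the sketch below; the stanza name, type list, and interval here are illustrative, so compare against what actually landed on the forwarder (e.g. with splunk btool inputs list):

```ini
# Illustrative WinHostMon input stanza (Splunk_TA_windows conventions).
# 'type' values and interval are examples, not a prescribed configuration.
[WinHostMon://host_info]
type = Computer;operatingSystem;processor;disk
interval = 600
disabled = 0
```

If the stanza is present but no data arrives, splunkd.log on the forwarder (ExecProcessor / WinHostMon entries) is usually the next place to look.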
Hello, I'm pretty new to Splunk and downloaded a couple of apps to get some dashboards going, including Security Essentials. I installed the app through Splunk Web, and when I opened it after a restart, many of the pages in SSE throw JavaScript errors. They all say "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." Here are the error logs from the console. Not sure how to proceed.
I am trying to order the values on the X-axis like this: ">3 days", ">5 days", ">15 days", ">30 days", ">100 days". I have tried the table command, but it's not giving me the expected output. Query:

| inputlookup acn_ticket_unresolved_dessertholdings_kv
| eval age=((now() - epoc_time_submitted)/86400), total_age=round(age,2)
| rangemap field=total_age ">3 days"=0-3.00 ">5 days"=3.01-15.00 ">15 days"=15.01-30.00 ">30 days"=30.01-100.00 ">100 days"=100.01-1000.00
| chart count as count1 over range by priority
| rename priority as Priority
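rangemap itself does not control row order, and a plain sort would order these labels lexically. One common workaround (a sketch built on the query in the question, not a tested rewrite of it) is to add a numeric sort key after the chart and drop it again:

```
| chart count as count1 over range by priority
| eval sort_key=case(range==">3 days",1, range==">5 days",2, range==">15 days",3, range==">30 days",4, range==">100 days",5)
| sort sort_key
| fields - sort_key
```

The case() list must match the rangemap labels exactly, including the ">" and "days" text.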
I have a search query whose statistical results look like this:

Login mode    Total login
xxx           48
Yyyy          23
aaa           52
bbbb          73

Now I need to display a bar chart which shows the logins by login mode, over the time range selected in the query. For example:
Hello, how do I align single value text to the left in Dashboard Classic? I tried to use text-align and float, but it didn't work. Please suggest. Thank you.

<panel id="DisplayPanel">
  <single id="datestyle">
    <search>
      <query>| makeresults
| addinfo
| eval text = "How to align this text to left?"
| table text</query>
    </search>
  </single>
</panel>
<panel depends="$alwaysHideCSS$">
  <title>Single value</title>
  <html>
    <style>
      #DisplayPanel {
        width: 100% !important;
        font-size: 16px !important;
        text-align: left !important;
        float: left;
      }
    </style>
  </html>
</panel>
Hi friends, I have logs like:

_time=time latitude=1 longitude=-1 other fields ...
_time=time latitude=1 longitude=-2 other fields ...
etc.

The objective is to translate the latitude and longitude values to their associated countries and then apply filters on those values, but I only have these coordinates. Hope someone can help me. Regards
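Splunk ships a built-in geospatial lookup, geo_countries, which can map coordinates to a country name (returned in featureId). A sketch, assuming the fields are literally named latitude and longitude and that filtering on a specific country (here "France", a made-up example) is the goal:

```
... your base search ...
| lookup geo_countries latitude, longitude OUTPUT featureId AS country
| search country="France"
```

If events fall outside any country polygon (e.g. at sea), featureId will be empty, so a fillnull or where isnotnull(country) step may be needed.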
Is it possible to attach two Duo consoles to the Splunk API? We have a standard console and are soft-migrating to Duo Federal, and would like visibility/ingestion for both in Splunk. We see the option to edit the Duo Splunk connector, but not to add a second one. Thank you!
Only one of my indexers is having this issue during an upgrade. Firstly, I used a domain account to install, and then encountered this error:

Splunk Enterprise Setup Wizard ended prematurely because of an error. Your system has not been modified. To install this program at a later time, run Setup Wizard again. Click the Finish button to exit the Setup Wizard.

Setup cannot copy the following files: Splknetdrv.sys SplunkMonitorNoHandleDrv.sys SplunkDrv.sys

I then referred to this KB: https://community.splunk.com/t5/Installation/Why-does-Splunk-upgrade-to-version-9-1-0-1-end-prematurely/m-p/652791 and followed its solution: install Splunk from the command line and use the LAUNCHSPLUNK=0 flag to keep Splunk Enterprise from starting after installation has completed. For example:

PS C:\temp> msiexec.exe /i splunk-9.0.4-de405f4a7979-x64-release.msi LAUNCHSPLUNK=0

You can complete the installation, and before running Splunk, you need to grant the user "Full Control" permissions to the Splunk Enterprise installation directory and all of its subdirectories.

Splunk upgraded successfully to 9.1.2 but is not able to start. I changed to a local admin account and tried a repair, but still hit the same error: Setup cannot copy the following files: Splknetdrv.sys SplunkMonitorNoHandleDrv.sys SplunkDrv.sys

The Splunk services are still unable to start, and there are no apparent errors in the logs. Can anyone provide assistance with this issue?
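For the "Full Control" step from that KB, a minimal command-line sketch (assuming the default install path and a hypothetical service account DOMAIN\splunksvc; substitute whatever account the splunkd service actually runs as, visible in services.msc):

```shell
:: Grant the Splunk service account Full Control over the install dir,
:: inherited by subfolders and files (OI/CI) and applied recursively (/T).
icacls "C:\Program Files\Splunk" /grant "DOMAIN\splunksvc:(OI)(CI)F" /T
```

Run this from an elevated command prompt before attempting to start the services again.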
I have a simple dashboard filtering logs by AppIDs and related Servername.

The first dropdown defaults to "*" for all AppIDs, but its search obtains all the AppIDs you can select, and it sets a token $AppID$, e.g.:

index="applogs" sourcetype="logs:apps:inventory" | table AppID | dedup AppID | sort AppID

The second dropdown searches by the $AppID$ token of the first dropdown, to get the list of Servernames returned for the selected AppID, e.g.:

$AppID$ index="syslogs" sourcetype="logs:servers:inventory" | eval Servername = host."\\".InstanceName | table AppID Servername | dedup Servername | sort Servername

This sets a token $Servername|s$ (to escape characters in the server name), which gets added to a bunch of search panels.

For example, select App49 in the first dropdown and it returns ServerA, ServerB, ServerC, ServerD in the second dropdown. Selecting ServerA, B, C or D in the second dropdown then filters a bunch of panels by that Servername token.

That's all working fine, but by default I want the option to search all panels by all $Servername$ options in the second dropdown related to the selected AppID. Adding a "*" wildcard option in the second dropdown, as in the first, just returns all Servernames, not the ones filtered by the $AppID$ token. How can I default my second dropdown to an "All" option that searches all panels by all the results that get populated in the second dropdown from the $AppID$ of the first?
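One common approach is to add a static "All" choice with the value * and replace the |s filter with an explicit prefix/suffix, so the wildcard is not quote-escaped at substitution time. A sketch (not tested against this dashboard; the index/sourcetype values are copied from the question):

```xml
<input type="dropdown" token="Servername">
  <label>Servername</label>
  <!-- Static "All" choice shown alongside the dynamically populated ones -->
  <choice value="*">All</choice>
  <default>*</default>
  <!-- Wrap the value at token-substitution time instead of using |s,
       so the * wildcard survives into the panel searches -->
  <prefix>Servername="</prefix>
  <suffix>"</suffix>
  <search>
    <query>index="syslogs" sourcetype="logs:servers:inventory" $AppID$
| eval Servername = host."\\".InstanceName
| dedup Servername | sort Servername | table Servername</query>
  </search>
  <fieldForLabel>Servername</fieldForLabel>
  <fieldForValue>Servername</fieldForValue>
</input>
```

The panels then reference $Servername$ directly; note that real backslashes in server names may still need escaping, which is why this is only a sketch, not a drop-in replacement.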
Hi, we are ingesting data into Splunk Cloud into the below index:

index=zscaler source=firewall

Is there a way we can forward this (from Splunk Cloud) to Trend Micro's HTTPS API or a TCP stream? Thanks in advance for any help!
I am working on migrating from CentOS 7 to Ubuntu 22. I have a single search head, an indexer cluster (3 indexers), and a deployment server used just to manage clients (not Splunk servers). For the SH and DS, is it just a straightforward matter of installing the same Splunk version on the new Ubuntu server, copying the config over, checking permissions, and starting it up (same IP, same DNS)? For the IDX cluster, do I build a new CM first and copy the config over, or are there other things to consider? What's a good process for the indexers (only 3)? Can I build new indexers on Ubuntu, add them to the cluster, and then remove the CentOS servers as the new Ubuntu servers are added, all the while letting clustering handle the data management?
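For the indexer swap, the rolling approach described in the last question is the commonly recommended one. A sketch of the drain step (assuming the default /opt/splunk install path; run on each old CentOS peer after the new Ubuntu peers have joined and the cluster is healthy):

```shell
# Take this peer offline gracefully: the cluster re-replicates its buckets
# to the remaining peers before the peer shuts down for decommissioning.
/opt/splunk/bin/splunk offline --enforce-counts
```

The --enforce-counts flag makes the peer wait until the replication and search factors are met again before going down, so do one peer at a time and check cluster health between steps.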
Hi folks, lately MC has started behaving a little weird. After performing an investigation, whenever a SOC analyst tries to reduce the risk score of an object, sometimes instead of reducing the risk score it creates a double entry. Please have a look at the attached image.
Hello, I need to monitor log files in the following directories:

"c:\users\%username%\appdata\local\app\$randomnumber$\app.log"

%username% is whoever is currently logged on (but I suppose I'd be OK with "*", i.e. any user folder), and $randomnumber$ is a unique ID that's always going to be different for every desktop, may change over time, and may be more than one folder for a given user. How would I write the monitor stanza in inputs.conf to do that? Thanks!
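A minimal sketch of such a stanza (the sourcetype and index names are placeholders, not from the question): in a monitor path, * matches anything within a single path segment, so one wildcard can cover the user folder and another the random-ID folder:

```ini
# Monitor app.log under every user profile and every random-ID subfolder.
# '*' matches exactly one path segment; use '...' instead to match any depth.
[monitor://C:\Users\*\AppData\Local\App\*\app.log]
disabled = false
sourcetype = app_log
index = endpoint_logs
```

If the random ID can nest deeper than one level, swapping that * for ... would cover arbitrary depth at the cost of a broader filesystem scan.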
I have a saved search that runs every day and does a partial fit over the previous day. I'm doing this because I need 50 days of data for the cardinality to be high enough to ensure accuracy. However, I don't want over 60 days of data. How do I build up to 50 days of data in the model but then roll off anything over 60 days? Thanks!
Hi all, the data checkpoint file for Windows logs is taking up a lot of disk space (over 100 GB), and we are having full-disk issues because of it. Where can I check the modular input script? How can I exclude the modinput for one of the checkpoints on particular servers? An example Windows log event is as follows:

\powershell.exe (CLI interpreter), Pid: 12345,\OSEvent: (Source: (Uid: xxxxxxxxx, Name: splunk-winevtlog.exe, Pid: 123123, Session Id: 0, Executable Target: Path: \Device\HarddiskVolume4\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\WinEventLog\Application)

Any help would be appreciated! Thanks in advance!
Good Afternoon, My leadership informed me that CrowdStrike is sending our logs to Splunk. Has anyone done any queries to show when a device is infected with malware? I don't know the CrowdStrike logs, but I'm hoping someone here can give me some guidance to get started. 
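As a starting point: if the data arrives via the CrowdStrike Falcon event streams add-on, detection events usually carry an event_simpleName field. A sketch (the index name here is a guess; check what your environment actually uses, e.g. with index=* sourcetype=*crowdstrike* over a short time range, or ask your admins):

```
index=crowdstrike event_simpleName=DetectionSummaryEvent
| table _time ComputerName UserName FileName DetectDescription SeverityName
| sort - _time
```

Once the right index/sourcetype is confirmed, the same search can be narrowed by severity or host to drive an alert.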
Hello all, I am using Splunk as a data source and trying to build dashboards in Grafana (v10.2.2 on Linux). Is there anything in Grafana so that I do not have to write 10 queries in 10 panels? Ideally, one base query would fetch data from Splunk, and then in Grafana I could write additional commands or functions on top of the base query for each panel, so the Splunk load is reduced; similar to "post-process search" in Splunk: Post Process Searching - How to Optimize Dashboards in Splunk (sp6.io). I followed the instructions below and was able to fetch data from Splunk, but it causes heavy load and stops working the next day, and all the panels show "No Data": Splunk data source | Grafana Enterprise plugins documentation. Your help will be greatly appreciated! Thanks in advance!
Hi team, I tried the below search but am not getting any results:

index=aws component=Metrics group=per_index_thruput earliest=-1w@d latest=-0d@d
| timechart span=1d sum(kb) as Usage by series
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024, 3)]
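One thing worth checking (an assumption about the environment, since the question doesn't say where these metrics live): per_index_thruput metrics are normally written to Splunk's own _internal index via metrics.log, not to a custom index like aws, and reading _internal requires the corresponding role capability. A sketch of the usual form:

```
index=_internal source=*metrics.log* group=per_index_thruput earliest=-1w@d latest=@d
| timechart span=1d sum(kb) as Usage by series
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024, 3)]
```

If index=aws is genuinely where the data should be, verify with index=aws component=Metrics over "All time" that any such events exist at all.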
I was wondering if anyone knew where I could find it, either in the logs or (even better) via the audit REST endpoint, when an automation account regenerates its auth token. I've looked through the audit logs but haven't seen an entry for it. Any leads or tips would be appreciated. Thank you.
Hello! I wanted to ask: what is the best way/configuration to get network device logs directly into Splunk? Thanks in advance!
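For completeness, a minimal sketch of the direct approach (port, sourcetype, and index here are illustrative choices, not requirements): most network devices can send syslog to UDP or TCP port 514, which Splunk can listen for via inputs.conf on an indexer or heavy forwarder:

```ini
# inputs.conf: listen for syslog sent directly by network devices.
[udp://514]
sourcetype = syslog
connection_host = ip
index = network

[tcp://514]
sourcetype = syslog
connection_host = ip
index = network
```

In production, a dedicated syslog server (e.g. syslog-ng or rsyslog) writing to files read by a universal forwarder is generally preferred over having Splunk listen directly, since it survives Splunk restarts without dropping UDP traffic.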