All Topics


Hi there, I have an <html> <style> block defined for a table in my dashboard. It defines a custom column width. It works when I only have a limited number of columns in my table (see "Table 1" in the example dashboard code below). However, when I add additional columns it stops working (see "Table 2"). I have searched high and low for a solution with no luck. Anyone got any ideas? Here is an example dashboard that demonstrates the issue - the cveID column is the focus.

<form version="1.1" theme="dark" hideFilters="true">
  <label>custom table column width not working</label>
  <row>
    <panel depends="$alwaysHideCSSPanel$">
      <html>
        <style>
          #customWidth1 th[data-sort-key=cveID] { width: 50% !important; }
        </style>
      </html>
    </panel>
    <panel>
      <table id="customWidth1">
        <title>Table 1</title>
        <search>
          <query>| makeresults | table _time Hostname instance_id Remediation Description Severity ExploitStatus DaysOpen OldestCVEDays cveID</query>
        </search>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$alwaysHideCSSPanel$">
      <html>
        <style>
          #customWidth2 th[data-sort-key=cveID] { width: 50% !important; }
        </style>
      </html>
    </panel>
    <panel>
      <table id="customWidth2">
        <title>Table 2</title>
        <search>
          <query>| makeresults | table _time Hostname instance_id Remediation Description Severity ExploitStatus DaysOpen OldestCVEDays OpenVulnCount DetectionLogic VendorRef VendorAdvisory Link cveID OldestCVEDate Application Version OldestCVEDate Application Version FirstSeen InstanceName amiName amiID awsOwner awsApplication Account_id AccountName DNSDomain OS IPAddress Team OU Site Environment Location OSGroup OSType</query>
        </search>
      </table>
    </panel>
  </row>
</form>
I can't see the event logs when searching on the Splunk Enterprise server, even though I have already checked that the Splunk server is running, and I created index = linux_universal_forwarder and host = linux_uf_1 in inputs.conf on the forwarder Linux machine. I also created a new receiving port 8889 and a new index = linux_universal_forwarder on the Splunk server. Why can't I see the logs? I am able to ping between the indexer and forwarder. Please help me fix this issue - how do I troubleshoot it step by step? I'm a beginner learning Splunk. Thank you.
Hi, I am trying to set up an alert that notifies by email when the count for the last 3 hours is greater than the rolling average of the last 7 days, using the query below. The query works fine, but the alert is not working / not getting triggered. The alert config I tried: under Trigger conditions on the alert screen, "Trigger alert when: Custom", with the condition "search alert==true".

Query:

sourcetype="cloudwatch" index=***** earliest=-6d@d latest=@d
| bucket _time span=1d
| stats count by _time
| stats avg(count) as SevenDayAverage
| appendcols [search sourcetype="cloudwatch" index=***** | stats count as IndividualCount]
| eval alert = if(IndividualCount > SevenDayAverage, "true", "false")

Sample result:

SevenDayAverage  IndividualCount  alert
5                1139             true
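As a sanity check of the trigger logic itself, here is a minimal Python sketch (with hypothetical counts mirroring the table in the post) of comparing an individual count against a seven-day average:

```python
def should_alert(daily_counts, individual_count):
    """True when the individual count exceeds the average of the daily counts."""
    seven_day_average = sum(daily_counts) / len(daily_counts)
    return individual_count > seven_day_average

# Hypothetical numbers: seven daily counts averaging 5, current count 1139
print(should_alert([5, 5, 5, 5, 5, 5, 5], 1139))  # → True
```

If the logic evaluates as expected, the problem is more likely in the alert's trigger configuration than in the comparison itself.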
Hi, I am new to Splunk. Could you please help me with the SPL below? I am trying to use the stats and table commands. We have 4 entries for the same incident, and I need to pick the earliest time.

index="monitoring" sourcetype="tool" incident_id=INC*
| stats earliest(_time) as early
| table "mc_host" "incident_id" "early"
| convert ctime(early)

I am getting an error when I execute it.
Hello, when trying to install this from my Splunk Enterprise instance (on my Windows 10 client) I'm getting:

Unable to initialize modular input "oci_logging" defined in the app "TA-oci-logging-addon": Introspecting scheme=oci_logging: script running failed (exited with code 1).
Hello Splunkers!! I am getting the error below while executing the backfill summary index command on my Splunk machine. Can anyone suggest what the issue might be and what I need to correct? This is the command I executed:

splunk cmd python fill_summary_index.py -app commissioning_reports -name "scada_alarms_start_base" -et -7d -lt now -index si_error -showprogress true -j 8 -dedup true -owner admin -auth admin:!Splunk001

Screenshot of the error:
Where can I find all the Splunk queries, and how do I use them?
Hello everyone, we have Splunk_TA_nix version 8.7.0, and some of its scripts do not work properly, such as lsof.sh, rlog.sh, etc. Also, sometimes after a while the scripts stop working properly and their logging is interrupted. Can anyone help me?
How do I perform a lookup against a CSV file from an index without combining data into one row (and without mvexpand)?

index=vulnerability_index | table ip, vulnerability, score

ip           vulnerability         score
192.168.1.1  SQL Injection         9
192.168.1.1  OpenSSL               7
192.168.1.2  Cross Site-Scripting  8
192.168.1.2  DNS                   5

CSV file: company.csv

ip_address   company  location
192.168.1.1  Comp-A   Loc-A
192.168.1.2  Comp-B   Loc-B
192.168.1.5  Comp-E   Loc-E

Lookup in the CSV from the index:

index=vulnerability_index | lookup company.csv ip_address as ip OUTPUTNEW ip_address, company, location

The vulnerability and score get merged into one row:

ip_address   company  location  vulnerability              score
192.168.1.1  Comp-A   Loc-A     SQL Injection, OpenSSL     9, 7
192.168.1.2  Comp-B   Loc-B     Cross Site-Scripting, DNS  8, 5

How do I match the ip_address in a separate row without using mvexpand? mvexpand has memory limitations and is very slow; it will not work over a large data set and time frame. My expected result is:

ip_address   company  location  vulnerability         score
192.168.1.1  Comp-A   Loc-A     SQL Injection         9
192.168.1.1  Comp-A   Loc-A     OpenSSL               7
192.168.1.2  Comp-B   Loc-B     Cross Site-Scripting  8
192.168.1.2  Comp-B   Loc-B     DNS                   5

Thank you for your help
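The expected result amounts to a plain left join that enriches each event row individually, never collapsing rows. A minimal Python sketch of that shape, using the sample data from the post:

```python
import csv
from io import StringIO

# Inline copy of company.csv from the post
company_csv = """ip_address,company,location
192.168.1.1,Comp-A,Loc-A
192.168.1.2,Comp-B,Loc-B
192.168.1.5,Comp-E,Loc-E
"""

events = [
    {"ip": "192.168.1.1", "vulnerability": "SQL Injection", "score": 9},
    {"ip": "192.168.1.1", "vulnerability": "OpenSSL", "score": 7},
    {"ip": "192.168.1.2", "vulnerability": "Cross Site-Scripting", "score": 8},
    {"ip": "192.168.1.2", "vulnerability": "DNS", "score": 5},
]

# Build an ip -> lookup-row map, then enrich each event row on its own
lookup = {row["ip_address"]: row for row in csv.DictReader(StringIO(company_csv))}
enriched = [{**e, **lookup.get(e["ip"], {})} for e in events]

for row in enriched:
    print(row)
```

One output row per input event, each carrying its own vulnerability and score plus the joined company and location.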
I've got a feed that is sending non-compliant JSON, since spath doesn't work on it. I put together this search:

index=dlp sourcetype=sft:json "{"
| head 1
| eval data='{"time": "2023-07-21T19:10:48+00:00", "pid": 24086, "msec": 1689966648.059, "remote_addr": "aaa.bbb.ccc.ddd", "request_time": 0.005, "host": "sitename.noname.org", "remote_user": "-", "request_filtered": "GET /healthz HTTP/1.1", "status": 200, "body_bytes_sent": 13, "bytes_sent": 869, "request_length": 72, "http_referer_filtered": "", "http_user_agent": "-", "http_x_forwarded_for": "-", "context": "973235423dccda96a385ca21c133891632a28d91"}'
| spath input=data

I'm not seeing any value for data, and thus nothing from the spath. Do I need to do something special to the eval to get it to process? TIA, Joe
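For what it's worth, an abbreviated version of that record parses as valid JSON in Python, which hints the issue may lie in the quoting of the eval expression rather than in the payload itself (in SPL, single quotes reference field names, not string literals):

```python
import json

# Abbreviated sample of the record from the post
data = '{"time": "2023-07-21T19:10:48+00:00", "pid": 24086, "status": 200, "body_bytes_sent": 13}'

parsed = json.loads(data)  # succeeds: the payload is well-formed JSON
print(parsed["status"])    # → 200
```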
I have a script that generates JSON-formatted log file entries, and I want to get this data into Splunk. What is the best way to write the data to disk so it can be monitored and ingested? Should I just append JSON data to a single file, or should the log file have only one entry at a time, where I overwrite/clear the file each time I need to add new data?
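A common pattern for this (a sketch, not official Splunk guidance) is append-only JSON Lines: one JSON object per line, appended to a file that a monitor input tails. Appending tends to play well with file-tracking tailers, whereas overwriting or truncating a file can cause re-reads or missed events. The path below is a hypothetical throwaway temp file:

```python
import json
import os
import tempfile

def append_event(path, event):
    """Append one JSON object as a single line (JSON Lines / NDJSON)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Demo against a throwaway temp file
fd, log_path = tempfile.mkstemp(suffix=".log")
os.close(fd)
append_event(log_path, {"event": "start", "ok": True})
append_event(log_path, {"event": "stop", "ok": True})
```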
We generally follow a pattern of logging in a key=value format. I am curious whether we should entirely avoid logs that are not in that format. Is it not recommended to have logs like:

log.info("Flushing kafka buffer before callback.");
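One middle ground (a sketch in Python, assuming nothing about the actual logging framework) is to keep the free-text message but wrap it in key=value pairs, so automatic field extraction still works on every line:

```python
def kv_message(message, **fields):
    """Render a log line as a msg field plus sorted key=value pairs."""
    pairs = " ".join(f'{k}="{v}"' for k, v in sorted(fields.items()))
    return f'msg="{message}" {pairs}'.strip()

line = kv_message("Flushing kafka buffer before callback.",
                  component="kafka", action="flush")
print(line)
```

Pure narrative lines are still searchable, but they only yield fields if something downstream knows how to parse them.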
In my Splunk environment I have access to the infrastructure overview, which shows whether an entity is stable, unstable, or inactive. I would like to know if there is a way to put these statuses for individual hosts in a Splunk dashboard, so I don't have to filter through them in the infrastructure overview. Thanks
I have a lookup that maps action, category, attributes, and more fields for Windows event codes. However, for each event code not all the columns have values:

EventCode, action, category, attr, .....
1,allow,,xyx,,,
2,fail,firewall,,....

When I add this to transforms.conf and props.conf and deploy it out to Splunk Cloud, it creates fields even when the value is empty for that match. Is there a way, using props.conf and transforms.conf, to make sure that the null values are not output?
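The desired behavior — emit a field only when the lookup row actually has a value — can be sketched in Python (field names taken from the sample rows in the post):

```python
def enrich(event, lookup_row):
    """Merge lookup fields into the event, skipping empty values."""
    return {**event, **{k: v for k, v in lookup_row.items() if v != ""}}

# Row for EventCode 1: category is empty in the lookup
row = {"action": "allow", "category": "", "attr": "xyx"}
result = enrich({"EventCode": "1"}, row)
print(result)  # no "category" key, because its value was empty
```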
Dear All, I have observed that license usage for one of the sourcetypes is high compared to previous days, even though the event count is low compared to previous days. How do I check this in Splunk, and how do I validate the license utilization?

I.e.:

Sourcetype: Cisco: asa
12 July '23 - event count: 16819087, license usage: 21 GB
14 July '23 - event count: 15722874, license usage: 42 GB
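One quick check is average event size: license volume can double while event count falls if the events themselves got larger. A sketch using the numbers from the post (treating GB as 1024^3 bytes, which is an assumption about the reporting units):

```python
def avg_event_bytes(license_bytes, event_count):
    """Average ingested bytes per event for a day."""
    return license_bytes / event_count

GB = 1024 ** 3
jul12 = avg_event_bytes(21 * GB, 16819087)
jul14 = avg_event_bytes(42 * GB, 15722874)
print(round(jul12), round(jul14))  # bytes per event on each day
```

If the per-event size has roughly doubled, the next step is to inspect what changed in the events (longer messages, new fields, debug verbosity, and so on).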
How do I download the OVA of Splunk SOAR for VMware?
Hello everyone, I'm encountering an issue with the web interface of the deployment instance. When I attempt to access it, it prompts me for a username and password, but after entering the credentials, it gets stuck on a loading gif that keeps spinning indefinitely. Upon inspecting the network tab, I noticed there's only one resource returning a 404 error:

https://my-deploy-instance:8000/en-US/splunkd/__raw/services/dmc-conf/settings/settings

In the past, I have successfully performed several Splunk upgrades by provisioning a new instance on my cloud, adding my configuration files, and starting the process. However, in my latest attempt the process no longer seems to work correctly. To stay safe and avoid disruptions on production instances, I adopted the technique of provisioning a separate stack for this work and testing it independently. Strangely, on this new stack I am encountering license-related complaints, stating that the license is revoked. This is puzzling, because the license used is exactly the same as the one used in the production version, which is functioning without any problems.

ERROR LMStack - failed to load license from file: splunk.license, err - The license file with signature XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) has been revoked. Please file a case online at http://www.splunk.com/page/submit_issue

I'm wondering if there have been any changes in this context that might be causing the issue. Could someone kindly advise if I am overlooking something in this process?
The errors indicate that some Python libraries are missing. We managed to fix a couple of library-related issues, but this keeps failing repeatedly for multiple library files. Are there any specific Python-related configurations that need to be installed so the jobs can run from PSA? We are encountering the error below in the scheduled jobs on the controller.
Hi, I was wondering whether, on a dashboard, you could click on an item and have it show all the information for that single instance of events. Would this be through drilldowns, or is there another way of doing it?
Hello, I have an alert that sends an email when there are x authentication failures. This works fine and returns user, count - but I'd also like to include a table containing the fields below when the alert trips. How can we go about doing that?

user, src_ip, count

Current alert:

index=main action=fail* OR action=block* (source="*WinEventLog:Security" OR sourcetype=linux_secure OR tag=authentication) user!=""
| stats count by user src _time
| stats sum(count) as count by user
| where count>200
| sort - count
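The shape of the desired output can be sketched in Python: count failures per (user, src_ip) pair while still thresholding on the per-user total (the events and the threshold of 2, standing in for the real 200, are hypothetical):

```python
from collections import Counter

events = [
    {"user": "alice", "src_ip": "10.0.0.1"},
    {"user": "alice", "src_ip": "10.0.0.2"},
    {"user": "bob", "src_ip": "10.0.0.3"},
]

per_pair = Counter((e["user"], e["src_ip"]) for e in events)
per_user = Counter(e["user"] for e in events)

threshold = 2  # stand-in for the real 200
table = [
    {"user": u, "src_ip": ip, "count": c}
    for (u, ip), c in per_pair.items()
    if per_user[u] >= threshold
]
for row in table:
    print(row)
```

Users below the threshold are dropped entirely; users above it keep one row per source IP.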