I have multiple tables in my dashboard and want to change the font for one particular table. I tried the CSS below, but it changes the font of all the tables in the dashboard:

#all_req_table .table th, .table td {
  text-align: left !important;
  font-weight: bold !important;
}

I want the font to be bold only for the table with id="all_req_table".
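One likely cause (an assumption, since the surrounding dashboard XML is not shown): in a comma-separated selector, each part is independent, so `.table td` above is not scoped under `#all_req_table` and matches every table. A minimal sketch that scopes both parts:

```css
/* Scope each comma-separated selector under the target table's id */
#all_req_table .table th,
#all_req_table .table td {
  text-align: left !important;
  font-weight: bold !important;
}
```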
I have a very basic query and want to trigger an alert when count = 0. The query:

index=rxc sourcetype=rxcapp earliest=-1h@h latest=@h
| stats count(eval(Code=200) as Success
| fillnull value=0
| table Success
| where Success=0

If I run this query in Splunk, no result is returned, because there were more than 0 successes in the past hour. But when I put it in git and want to be alerted only when the count in the last hour is 0, it triggers the alert even though there are more than 0 counts in the last hour. I don't know what is wrong. I also used the query the following way, with the same problem:

index=rxc sourcetype=rxcapp earliest=-1h@h latest=@h
| stats count(eval(Code=200)) as Success
| eval SuccessCount=0
| eval pram = if((Count=0),"yes","no")
| where pram="yes"
| table SuccessCount

No idea why this simple query is causing a problem; can anyone help? The objective is very simple: alert when count = 0 in the past hour. I know this is basic but I cannot find the problem. I am using something like the below in GitHub:

enableSched = 1
cron_schedule = 8 */1 * * *
alert.suppress = 0
alert.suppress.period = 15
realtime_schedule = 1
counttype = number of events
action.email.inline = 1
action.email.sendresults = 1
action.email.to = xx
action.email.subject = xx
action.email.maxresults = 999
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h
quantity = 0
relation = greater than
search = index=rxc sourcetype=rxcapp earliest=-1h@h latest=@h|stats count(eval(Code=200) as Success|fillnull value=0| table Success | where Success=0
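Two things stand out in the saved-search string (assumptions, since the deployed config isn't visible): `count(eval(Code=200) as Success` has an unbalanced parenthesis, and comparisons inside `eval` use `==`, not `=`. A sketch of a corrected search line for savedsearches.conf:

```
# Sketch only: assumes the intent is "alert when zero Code=200 events in the
# last full hour"; note the balanced parentheses and the == inside eval
search = index=rxc sourcetype=rxcapp earliest=-1h@h latest=@h \
| stats count(eval(Code==200)) as Success \
| where Success=0
```

With `where Success=0` the search returns a row only in the failure case, so the trigger condition "number of events greater than 0" then fires exactly when the count is zero.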
I am trying to create an alert, but I have issues because the logging is not standard: each sourcetype has its own field name for error levels (e.g. type) and its own field name for the error message (e.g. msg, or Message). In other words, I know the possible field names common across sourcetypes, and I eventually want to timechart or stats over all possible fields regardless of data source. This still does not work: if a certain field does not exist, all results are null. Any ideas, or a different way to do it? My query:

index=t sourcetype=* (type=error OR T=error OR Level=Critical) (msg=* OR MSG=* OR Message=* OR message=* OR reason=*)
| eval MM=msg."".MSGmsg."".Message."".message."".reason
| timechart count by MM
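Concatenating with `.` in eval yields null if any operand is null, which matches the symptom described. A sketch using `coalesce`, which returns the first non-null argument (field names taken from the post):

```
index=t sourcetype=* (type=error OR T=error OR Level=Critical)
| eval MM=coalesce(msg, MSG, Message, message, reason)
| timechart count by MM
```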
Hello everyone, I signed up for the 7-day evaluation of Splunk Enterprise Security and got the credentials and link to log in. However, the instance is TOO slow. It takes a few minutes to open a page (if it opens at all). All the other Splunk.com pages open just fine and my internet speed is good. Sometimes I receive the "503 Service Unavailable" error. Once, I tried opening multiple pages to see if any would load; after a few minutes two or three pages opened, but all showed "search is waiting for input" even though I added "*" where applicable and clicked Submit. I still get problems 1 and 2, and when a page does open (if at all), I get errors like "unable to load results", "waiting for queued job to start", and "Search not executed: The maximum number of concurrent historical searches on this instance has been reached., concurrency_category="historical", concurrency_context="instance-wide", current_concurrency=10, concurrency_limit=10". Is there a way to fix all this?
I have a dashboard with 3 panels in a row, like A B C, but I want panels A and B stacked together vertically and then C separately. Is there a way?
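In Simple XML, one `<panel>` can hold several visualizations, which render stacked vertically. A sketch (queries are placeholders):

```xml
<!-- One panel holds A and B stacked; C sits beside them in its own panel -->
<row>
  <panel>
    <chart><search><query>... search A ...</query></search></chart>
    <chart><search><query>... search B ...</query></search></chart>
  </panel>
  <panel>
    <chart><search><query>... search C ...</query></search></chart>
  </panel>
</row>
```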
Greetings, I have a Synology as the host for a Docker container environment. I have Splunk installed and running fine (host network). I have installed the Splunk forwarder app as a Docker container (bridge network).

Local port TCP <<<<<>>>>> Container port TCP
1088 <<<<<>>>>> 8088
1089 <<<<<>>>>> 8089
1997 <<<<<>>>>> 9997

My question is about the ports being used by both Splunk and the forwarder. The wiki says I need to open port 9997 on both containers, which I don't see how to do, since they would conflict. Link: https://splunk.github.io/docker-splunk/SETUP.html#install

I already have the Pi-hole log file ready and good to go: /volume1/docker/Pi-hole/var/log/pihole.log

And I have the forwarder inputs conf file ready as well: /volume1/docker/Splunk/Splunk-FWD/opt/splunkforwarder/etc/system/local/inputs.conf

Here is its content:

[splunktcp://9997]
disabled = 0

[monitor:/volume1/docker/Pi-hole/var/log/pihole.log]
whitelist = pihole\.lo.+
disabled = false
sourcetype = pihole:log

Here, as you can see, the port for Splunk is listening:

root@Synology:~# netstat -plnt | grep ':9997'
tcp 0 0 0.0.0.0:9997 0.0.0.0:* LISTEN 29533/splunkd

Any idea how to get it working, or what I might be missing? Thanks Anas
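One observation, offered as an assumption since the full setup isn't visible: `[splunktcp://9997]` is a receiving-side setting; a universal forwarder sends data via outputs.conf, so only the Splunk instance needs to listen on 9997. Since Splunk runs on the host network, the forwarder container can reach the host's 9997 directly. A sketch of the forwarder's outputs.conf (the IP is a placeholder):

```
# Sketch of outputs.conf inside the forwarder container; <synology-host-ip>
# is a placeholder for the Docker host's address
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = <synology-host-ip>:9997
```

Note also that the documented monitor stanza form for an absolute path is `[monitor:///volume1/...]` with three slashes.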
Hi All, I want to be able to add a timestamp to each event, so that I can then perform some stats over a period of time. Ideally I'd like to come up with metrics about how the dataset has changed over a period of time (up to 6 months). I guess my first step is to get the _time for every event. However, when I add this, it is not showing in my output. Can anyone help with this? On top of this, I would love any generic help with the other piece. Query below:

| tstats values(devDeviceName) as devDeviceName values(devDeviceIp) as devDeviceIp values(devProductFamily) as devProductFamily values(devProductId) as devProductId values(matchConfidence) as matchConfidence values(matchConfidenceReason) as matchConfidenceReason WHERE ( index=xxxx-np sourcetype=psirt_details_vulnerable_v7 earliest=-2d matchConfidence="*" matchConfidenceReason!="Missing: Feature" [| `last_np_source("index=xxxx-np", "psirt_details_vulnerable_v7")`] devDeviceName!=".*" ) by deviceId, psirtColdId
| fields psirtColdId deviceId devDeviceName devDeviceIp devProductFamily devProductId nextSteps matchConfidence matchConfidenceReason cv* sir psirtAdvisoryId _time
| eval ss = psirtColdId."@".matchConfidence."@".matchConfidenceReason
| append [search index="xxxx-np" sourcetype="device_details" | table deviceId configStatus deviceSysname swVersion ]
| stats values(*) as * by deviceId
| mvexpand ss
| eval sss=split(ss,"@")
| eval psirtColdId = mvindex(sss,0)
| eval matchConfidence = mvindex(sss,1)
| eval matchConfidenceReason = mvindex(sss,2)
| regex devDeviceName=".*"
| rex mode=sed field=devProductId "s/,.*//"
| lookup xxxx-psirt_bulletins.csv psirtColdId
| rex mode=sed field=deviceSysname "s/\..*$//"
| makemv delim=";" matchConfidenceReason
| mvexpand matchConfidenceReason
| eval newMCR=if(configStatus!="Completed" and match(matchConfidenceReason, "Missing: Feature"), "Missing: Configuration", matchConfidenceReason)
| fields - matchConfidenceReason
| mvcombine newMCR
| eval matchConfidenceReason=mvjoin(newMCR, ";")
| fields deviceSysname deviceId devDeviceName devDeviceIp configStatus swVersion devProductFamily devProductId nextSteps matchConfidence matchConfidenceReason cv* sir psirtAdvisoryId bulletinFirstPublished bulletinLastUpdated bulletinMappingCaveat bulletinTitle bulletinSummary bulletinUrl _time
| lookup xxxx-hardware.csv deviceId OUTPUT ps
| mvexpand ps
| rex field=ps "^(?.*?)::(?.*?)$"
| where NOT ((psirtAdvisoryId="cisco-sa-20200205-nxos-cdp-rce" or psirtAdvisoryId="cisco-sa-20200205-fxnxos-iosxr-cdp-dos") and match(deviceSysname,"-(?:ISD|CMD|OBI|DMD)$") and match(devProductId,"^N9K"))
| where NOT (match(devProductId,"9300L") and match(cveId,"(?:CVE-2017-6663|CVE-2017-6664|CVE-2017-6665|CVE-2019-1649)$"))
| lookup secure_boot.csv deviceId OUTPUTNEW deviceId as secureId
| where NOT (psirtAdvisoryId="cisco-sa-20190513-secureboot" and isnull(secureId))
| append [inputlookup psirt2.csv append=true
    | lookup devices2.csv device_type
    | eval zip=mvzip(affected_device,sw_version,",")
    | mvexpand zip
    | eval zip2 = split(zip,",")
    | eval device_name=mvindex(zip2,0)
    | eval sw_version=mvindex(zip2,1)
    | eval device_type=if(device_type="ISE","Identity Services Engine",if(device_type="SD-WAN","SD-WAN Solution",if(device_type="DNAC","DNA Center",if(device_type="DNA-S-C","DNA Spaces Connector",if(device_type="SD-WAN-R","IOS-XE SD-WAN Software",device_type)))))
    | eval cvss_temp_score="No temporal CVSS score available"
    | fields device_name sw_version advisory_id cvss_base_score cvss_temp_score last_updated first_published device_type advisory_title sir cve_id fixed_sw bcs_comments bcs_risk url
    | rename advisory_id as psirtAdvisoryId
    | rename cvss_base_score as cvssBase
    | rename cvss_temp_score as cvssTemporal
    | rename last_updated as bulletinLastUpdated
    | rename first_published as bulletinFirstPublished
    | rename device_type as devProductFamily
    | rename advisory_title as bulletinTitle
    | rename url as bulletinUrl
    | rename cve_id as cveId
    | rename device_name as deviceSysname
    | rename sw_version as swVersion
    | eval matchConfidence = "Vulnerable"
    | eval matchConfidenceReason = "Manual Analysis - Not mapped natively in BCS"]
| table _time deviceId deviceSysname devDeviceName devDeviceIp configStatus devProductFamily devProductId productId swVersion serialNumber matchConfidence matchConfidenceReason cv* sir psirtAdvisoryId bulletinFirstPublished bulletinLastUpdated bulletinTitle bulletinUrl fixed_sw bcs_comments bcs_risk
| rename sir as Severity
| rename productId as childProductId

Thanks, Michael
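On the _time question: `tstats` only outputs the aggregated fields and the fields in the `by` clause, so listing `_time` later in `fields` yields nothing unless it is aggregated or grouped. Two common sketches (field lists abbreviated; the choice depends on whether one timestamp per group or a time series is wanted):

```
| tstats latest(_time) as _time values(devDeviceName) as devDeviceName
    WHERE index=xxxx-np sourcetype=psirt_details_vulnerable_v7 earliest=-2d
    by deviceId, psirtColdId
```

Or, to bucket results over time for trending:

```
| tstats count WHERE index=xxxx-np sourcetype=psirt_details_vulnerable_v7
    by deviceId, psirtColdId, _time span=1d
```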
Hi, I want to show the input, dropdown, checkbox, and radio buttons entirely on the left-hand side and my main dashboard data on the right-hand side. Any idea how to do this?
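In Simple XML, inputs can be placed inside a `<panel>` rather than the top-level `<fieldset>`, so a left panel can hold all the inputs while the panel beside it holds the data. A sketch (tokens and searches are placeholders):

```xml
<row>
  <panel>
    <input type="dropdown" token="site_tok"> ... </input>
    <input type="checkbox" token="opt_tok"> ... </input>
    <input type="radio" token="mode_tok"> ... </input>
  </panel>
  <panel>
    <table><search><query>index=main $site_tok$</query></search></table>
  </panel>
</row>
```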
I have a query that produces a line chart with two plotlines; I would like to add a trend line for each line.

sourcetype=OktaIM2:log (debugContext.debugData.requestUri=/app/office365/xxxxxxxxxxxxxxxxxxxxxxx/sso/wsfed/active OR debugContext.debugData.requestUri=/app/office365/xxxxxxxxxxxxxxxxxxxxxxx/sso/wsfed/passive) AND debugContext.debugData.requestUri=office AND loginName=*
| timechart span=1d count by debugContext.debugData.requestUri

Thanks!
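The `trendline` command can append a moving average per series. After `timechart ... by`, each series becomes a column named after the field value, so the column names below are assumptions based on the two requestUri values in the post:

```
... | timechart span=1d count by debugContext.debugData.requestUri
| trendline sma7("/app/office365/xxxxxxxxxxxxxxxxxxxxxxx/sso/wsfed/active") as active_trend
    sma7("/app/office365/xxxxxxxxxxxxxxxxxxxxxxx/sso/wsfed/passive") as passive_trend
```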
Problem:  Indexing throughput drops linearly when new data sources/forwarders/apps are added.
I created a dashboard using a report. The report creates a table of data. The filter would be a text box for purchCostReference. (Screenshot of the table omitted.) I don't understand how to connect the inputs to the table data. The text box will be an input, and when a value is entered it should filter the table data on the purchCostReference column. This is the current XML:

<form>
  <label>Thru Train Dashboard</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="text" token="purchCostReferenceToken" searchWhenChanged="true">
      <label>TMS Reference Number</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Thru Train XML DATA</title>
      <table>
        <search ref="ThruTrainReportNestedResults"></search>
        <option name="drilldown">row</option>
        <option name="rowNumbers">true</option>
      </table>
    </panel>
  </row>
</form>

The current report does not take parameters to filter the data. I would like to add filters to the report but do not understand how to do that. Is there a way to use the data displayed on the current dashboard and filter it without changing the report? How do I add parameters to filter the data?
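A `<search ref="...">` cannot take tokens, but the report can be run inline with the `savedsearch` command and post-filtered by the token, leaving the report itself unchanged. A sketch (the column name is assumed from the post):

```xml
<search>
  <query>| savedsearch "ThruTrainReportNestedResults"
| search purchCostReference=$purchCostReferenceToken$</query>
</search>
```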
I have my inputs.conf set up like so:

[monitor:///var/log/java]
disabled = 0
index = myindex
sourcetype = metrics_csv
whitelist = metrics.*.csv
CRCSALT = <SOURCE>

But even though each filename is UNIQUE, with a timestamp, etc., I still get the error that the "File will not be read"! Thoughts?

ERROR TailReader - File will not be read, is too small to match seekptr checksum (). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source.
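One thing worth checking (an assumption about the root cause): the documented setting name is `crcSalt`, and conf attribute names are case-sensitive, so an all-caps `CRCSALT` may simply be ignored, which would explain why the error persists. A sketch of the stanza with the documented spelling:

```
[monitor:///var/log/java]
disabled = 0
index = myindex
sourcetype = metrics_csv
whitelist = metrics.*.csv
crcSalt = <SOURCE>
```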
I am trying to create a simple dashboard to track active sites using a query like the one below. The query works and returns values, but my supervisor has asked me to add a background color to the values in the dashboard. The query checks whether a service is running on a set of servers: if the service is running on server A it returns Town Name 1; if it is running on server B it returns Town Name 2; if neither is found it returns "Down". I have tried the Single Value and Status Indicator visualizations, but both require a numeric value to use the out-of-the-box color formatting. How can I get the background color to change based on the text values Town1, Town2, and Down?

index=windows source=service host=servername* Name=service_name* earliest=-5m State="Running"
| eval Site=if(host="server1", "Town1", if(host="server2","Town2","Down"))
| dedup Site
| table Site
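If a table visualization is acceptable, Simple XML can map string values directly to cell colors (the hex values below are illustrative):

```xml
<table>
  <search><query>index=windows source=service host=servername* Name=service_name* earliest=-5m State="Running"
| eval Site=if(host="server1", "Town1", if(host="server2","Town2","Down"))
| dedup Site
| table Site</query></search>
  <format type="color" field="Site">
    <colorPalette type="map">{"Town1":#53A051,"Town2":#006D9C,"Down":#DC4E41}</colorPalette>
  </format>
</table>
```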
I have one indexer receiving events from a remote Windows host via the Universal Forwarder. I am trying to filter out events that contain the string 'empty logger' in the log file D:\Logs\Test\testlog5_29_20.log on the remote server. I have attempted to use the props.conf and transforms.conf files on the indexer to send the events matching the regex to the nullQueue, but the events in question are still being indexed. I suspect the source stanza in the props.conf file isn't correct, as I am specifying a directory that only exists on the remote Windows host. Am I correct in that assumption?
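A `[source::...]` stanza matches the source field value as the forwarder reports it (the Windows path), not a path on the indexer's own disk, so the stanza can work even though the directory doesn't exist locally. A sketch of the pair (stanza names are illustrative):

```
# props.conf on the indexer; the source:: pattern matches the source field
# value and allows wildcards
[source::D:\Logs\Test\testlog*.log]
TRANSFORMS-drop = drop_empty_logger

# transforms.conf
[drop_empty_logger]
REGEX = empty logger
DEST_KEY = queue
FORMAT = nullQueue
```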
Hi, can someone please help me regex a password field to mask data? I've been trying to figure out how to mask the password in the following example:

npx violation-comments-to-cloud-command-line -username JoeSmith@company.com -password abcdef78 -ws walace -rs ttcc-lsls -prid 1441 -v CHECKSTYLE . '.*/reports/filename-goes-here-results.xml$' ESLint -keep-old-comments true -www1 true

I've tried many variations, but it either deletes the remainder of the event or doesn't work:

[password-anonymizer]
REGEX = (?m)^(-password\s).*$
FORMAT = $1########
DEST_KEY = _raw

Thanks
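Two issues are visible in the transform (stated as assumptions about intent): `^` anchors the match at the start of the line, but `-password` appears mid-line, and `.*$` swallows everything after it, which is why the remainder of the event disappears. A sketch that captures the text on both sides and replaces only the password token:

```
[password-anonymizer]
REGEX = (?m)^(.*-password\s+)\S+(.*)$
FORMAT = $1########$2
DEST_KEY = _raw
```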
We are looking to hide some real-time search panels on a dashboard while they show "Waiting for data...". Is this possible?
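One Simple XML approach worth trying (token and condition names are illustrative, and behavior can vary by version): keep the panel hidden via `depends` until the search reports results in a `<progress>` handler:

```xml
<panel depends="$show_rt$">
  <table>
    <search>
      <query>... real-time search ...</query>
      <earliest>rt-5m</earliest>
      <latest>rt</latest>
      <progress>
        <condition match="'job.resultCount' &gt; 0">
          <set token="show_rt">true</set>
        </condition>
      </progress>
    </search>
  </table>
</panel>
```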
I am trying to set up basic encryption between the Universal Forwarder and indexer using the certs that come with the install. I am following this Splunk doc but running into issues: https://docs.splunk.com/Documentation/Splunk/8.0.3/Security/ConfigureSplunkforwardingtousethedefaultcertificate

In inputs.conf for the indexer, found under C:\Program Files\Splunk\etc\system\local on my Splunk server, I added this stanza:

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
requireClientCert = false

Then in outputs.conf for the UF, found under C:\Program Files\SplunkUniversalForwarder\etc\system\local on one of my servers, I have this config:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = [SplunkServerNameHere]:9997
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
sslVerifyServerCert = false

[sslConfig]
caCertFile = cacert.pem
caPath = $SPLUNK_HOME\etc\auth

[tcpout-server://[SplunkServerNameHere]:9997]

I then restarted both the Splunk server and the UF, and found logs were still being ingested into the indexer with no issues, except from the UF I was configuring for an encrypted connection. It worked with no issue prior to the configuration change, but its traffic was rejected after the UF was restarted. I looked at splunkd.log on the Splunk server and found this error:

ERROR TcpInputProc - Message rejected. Received unexpected message of size=369295616 bytes from src=[ClientIPHere]:60167 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
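That error pattern is what typically appears when TLS traffic hits a plain `splunktcp` port: the `[SSL]` stanza only supplies certificate settings, while the listening port itself must be declared as an SSL listener. A sketch of the indexer's inputs.conf (assumes 9997 is currently defined as a plain `[splunktcp://9997]` stanza to be replaced):

```
# inputs.conf on the indexer: the port must be a splunktcp-ssl stanza;
# [SSL] alone does not enable TLS on an existing splunktcp port
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
requireClientCert = false
```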
I am trying to get a Python script to run after a search returns a username. The search returns one username after doing a few checks (works great). The script adds a user to an AD group (works great). My issue is that the "run a script" alert action is now deprecated, and I can't find proper documentation on passing the event field into a Python argument. My Python script is saved in $SPLUNK_HOME$/bin/scripts.
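The documented replacement for the deprecated run-a-script action is a custom alert action, which Splunk invokes with `--execute` and a JSON payload on stdin; the triggering result row's fields appear under the payload's "result" key. A minimal sketch (function and field names are illustrative, not from the original post):

```python
import json
import sys

def extract_field(payload, field="username"):
    """Pull a field of the triggering result row out of a custom
    alert action payload (the row lives under payload["result"])."""
    return payload.get("result", {}).get(field)

if __name__ == "__main__" and "--execute" in sys.argv:
    # Splunk writes the JSON payload to stdin when it runs the action
    payload = json.loads(sys.stdin.read())
    user = extract_field(payload)
    # ... hand `user` to the existing AD-group logic here ...
```

The existing AD script could then be called with `user` as an argument instead of relying on the old per-event argument convention.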
When I select the Inputs panel, it gets stuck at Loading. I've worked around this on the back end, but I would like to ensure this works for my prod box. Any suggestions?
Got the Box Add-on working; now for reporting, I cannot find the Box App for Splunk. Is there a replacement?