All Topics

Please help me load jquery-ui into a dashboard XML. Also, can I load the jquery-ui CSS inside require.conf? In the browser console I'm getting this error:

JQuery Version: 3.6.0
VM4289:50 Uncaught TypeError: Cannot read properties of undefined (reading 'ui')
    at eval (eval at <anonymous> (dashboard.js:1276:187236), <anonymous>:50:20)
    at Object.execCb (eval at module.exports (dashboard.js:632:662649), <anonymous>:1658:33)
    at Module.check (eval at module.exports (dashboard.js:632:662649), <anonymous>:874:51)
    at Module.eval (eval at module.exports (dashboard.js:632:662649), <anonymous>:1121:34)
    at eval (eval at module.exports (dashboard.js:632:662649), <anonymous>:132:23)
    at eval (eval at module.exports (dashboard.js:632:662649), <anonymous>:1164:21)
    at each (eval at module.exports (dashboard.js:632:662649), <anonymous>:57:31)
    at Module.emit (eval at module.exports (dashboard.js:632:662649), <anonymous>:1163:17)
    at Module.check (eval at module.exports (dashboard.js:632:662649), <anonymous>:925:30)
    at Module.enable (eval at module.exports (dashboard.js:632:662649), <anonymous>:1151:22)

require.config({
    waitSeconds: 0,
    paths: {
        'localjquery': '/static/app/myapp/lib/jquery.min',
        'jqueryui': '/static/app/myapp/lib/jquery-ui.min'
    },
    shim: {
        'jqueryui': {
            deps: ['localjquery']
        }
    }
});
require([
    // 'splunkjs/ready!',
    'underscore',
    'backbone',
    'localjquery',
    'splunkjs/mvc',
    'jqueryui',
    'splunkjs/mvc/simplexml/ready!'
], function (_, Backbone, $, mvc) {
    defaultTokenModel = mvc.Components.get("default");
    console.log("JQuery Version:");
    console.log(jQuery().jquery);
    console.log("JQuery-UI Version:");
    console.log($.ui.version);
});

Dashboard:

<dashboard script="input_slider_range.js" stylesheet="lib/jquery-ui.min.css">
  <label>slider range</label>
  <row>
    <panel>
      <html>
        SLIDER
        <p>
          <label for="amount">Price range:</label>
          <input type="text" id="amount" readonly="true" style="border:0; color:#f6931f; font-weight:bold;" />
        </p>
        <div id="slider-range"></div>
      </html>
    </panel>
  </row>
</dashboard>
Hello, my goal is to create a ping test for several cameras we have onsite, and I'm looking for advice on this. We use software called Genetec for our cameras (not sure whether it can be integrated with Splunk, nor whether that is even necessary).

Details:
- Splunk Cloud (3 on-premise hosts in AWS)
- Cameras connect to a physical router at our office; I have access to that router
- Private IPs within a VLAN
- I have access to the cameras' private IPs and can ping them when connected to our VPN
- Genetec software is used for our cameras

Goals:
- Create alerts when the cameras go down, via a ping test
- Possibly create a dashboard showing each camera's availability (on or off)
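Not part of the original question, but one common pattern for this is a scripted input that pings each camera and emits one key=value event per host, which Splunk can then alert and dashboard on. A minimal sketch under that assumption — the IPs, field names, and use of the Linux `ping` binary are all placeholders to adapt:

```python
#!/usr/bin/env python3
"""Sketch of a scripted input: ping each camera, print one event per host.
The camera IPs below are hypothetical placeholders."""
import subprocess
import time

CAMERAS = ["10.0.0.11", "10.0.0.12"]  # placeholder private IPs

def format_event(host, reachable):
    """Build a Splunk-friendly key=value event line."""
    ts = time.strftime("%Y-%m-%d %H:%M:%S")
    status = "up" if reachable else "down"
    return f"{ts} camera={host} status={status}"

def is_reachable(host, timeout=2):
    """Return True if a single ICMP echo succeeds (Linux `ping -c 1`)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def main():
    for cam in CAMERAS:
        print(format_event(cam, is_reachable(cam)))
```

An alert would then fire on `status=down`, and a dashboard panel could show the latest status per camera.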
Hello peeps, I currently have a list of processing times, and I am trying to create a dashboard that shows the average time, the max time, and a count of how many times each action is processed.

index=IndexA
| stats avg(cTotal) max(cTotal) count(cTotal) by name
| sort 10 -count(cTotal)

Totaltime is a given header (example screenshot not included). When I run the search, I get accurate results for count and max, but the avg is not currently working. I'm thinking it's because of the time string (hours, min, sec, millisec), but if anyone has any advice on how to make the average work, I would love to hear it!
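A likely cause: if the values are strings like "0:02:31.184", avg() has nothing numeric to average, while count and max still appear to work. In SPL the usual fix is to convert the string to seconds with eval before stats. A minimal Python sketch of the same conversion, assuming an "H:MM:SS.mmm" shape (the field shape is an assumption, since the example was not included):

```python
def duration_to_seconds(s):
    """Convert an 'H:MM:SS.mmm' duration string to float seconds."""
    hours, minutes, seconds = s.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

def average_duration(values):
    """Average a list of duration strings, returning seconds."""
    secs = [duration_to_seconds(v) for v in values]
    return sum(secs) / len(secs)

# e.g. average_duration(["0:00:30.000", "0:01:30.000"]) -> 60.0
```

The SPL equivalent would compute a numeric seconds field first (for example via split/arithmetic or strptime), then run `stats avg(...)` on that field.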
We are trying to filter out events from a Syslog server that ingests data from a number of sources; the ones we want to filter come from our Meraki devices. Each Meraki is considered a source, and the sourcetype is meraki. This is a sample of the events coming into Splunk:

2022-07-08 07:14:51.427 xxx.xxx.xxx.xxx 1 Location_XXX flows src=xxx.xxx.0.1 dst=8.8.8.8 mac=70:D3:79:XX:XX:XX protocol=icmp type=8 pattern: allow icmp
host = xxx.xx.0.2
source = /syslog0/syslog/meraki/xxx.xx.0.2/messages.log
sourcetype = meraki

There are more than 100 sources, all using the format /syslog0/syslog/meraki/<IP Address>/messages.log. How can I put that source in props.conf without listing each one separately?
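For reference, props.conf accepts wildcards in source:: stanzas, so a single stanza can match every per-IP path. A sketch (the transform name is a hypothetical placeholder; attach whatever filtering transform you intend):

```ini
# props.conf -- one stanza matching every Meraki source path
# ('*' matches within a single path segment, so it covers each <IP Address>)
[source::/syslog0/syslog/meraki/*/messages.log]
TRANSFORMS-filter_meraki = meraki_filter
```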
We recently upgraded our on-prem Splunk instances to version 9.0.0, and now any time we open one of our home-grown dashboards we get this error:

This dashboard version is missing. Update the dashboard version in source

We were on an older 8.x version for a while, so I have no idea when this changed.
Hi, I have two event fields with the same name, "timestamp". I just want to display (in stats) the "timestamp" field from the "ResponseReceive" logEventType, not the one from logType "SystemLog". Currently it displays both. Is there a way to do this? Any assistance is appreciated. Thank you!

... | fields timestamp, apiName, apiVersion, ceoCompanyId, entityId, sessionId, transactionDetailsResponse.transactionDetailsList.totalCount, transactionDetailsResponse.transactionDetailsList.transactionDetails{}.acctNumber, transactionDetailsResponse.transactionDetailsList.transactionDetails{}.Amount, transactionDetailsResponse.transactionDetailsList.transactionDetails{}.tranDateTime, transactionDetailsResponse.transactionDetailsList.transactionDetails{}.totalTranCount
| rename transactionDetailsResponse.transactionDetailsList.totalCount AS "TransactionCount", transactionDetailsResponse.transactionDetailsList.transactionDetails{}.acctNumber AS "AcctNum", transactionDetailsResponse.transactionDetailsList.transactionDetails{}.Amount AS "Amount", transactionDetailsResponse.transactionDetailsList.transactionDetails{}.tranDateTime AS "TranDateTime", transactionDetailsResponse.transactionDetailsList.transactionDetails{}.totalTranCount AS "TotalTranCount"
| stats values(timestamp) AS timestamp, values(TranDateTime) AS TranDateTime, values(apiName) AS apiName, values(apiVersion) AS apiVersion, values(ceoCompanyId) AS ceoCompanyId, values(entityId) AS entityId, values(TotalTranCount) AS TotalTranCount, values(AcctNum) AS AcctNum by sessionId
Hi all, I have this report (screenshot not included).

My requirement: the table should show only those events that do NOT have Plugin Name = "TLS Version 1.1 Protocol Deprecated" with Port = 8443 or 8444 (the rows I have highlighted in yellow), while still showing events that have Plugin Name = "TLS Version 1.1 Protocol Deprecated" on any other port. With the search below, the result excludes that Plugin Name on all ports as well. In short, I want all events except those where Plugin Name = "TLS Version 1.1 Protocol Deprecated" AND Port = 8443 or 8444. Any suggestions?
In the logs there are multiple lines like the ones below, and I want to print all of them in a table, but my search query only prints one value. I need help printing multiple records.

Balance amount is zero for invoice id:20220402-126-12300-A
Balance amount is zero for invoice id:20220502-126-12300-B
Balance amount is zero for invoice id:20220602-126-12300-C

Need to print like: 20220704-126-77300-A, 20220404-126-77300-A, 20220704-126-77300-A

The query I am trying:

rex field=_raw "Balance amount is zero for invoice id:(?P<InvoiceExceptionNo>\S+)"
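In SPL this is typically solved by adding `max_match=0` to rex, which captures every match into a multivalue field rather than stopping at the first. The same "all matches" idea, sketched in Python with `re.findall`:

```python
import re

# Sample log lines, as in the question
LOG = """Balance amount is zero for invoice id:20220402-126-12300-A
Balance amount is zero for invoice id:20220502-126-12300-B
Balance amount is zero for invoice id:20220602-126-12300-C"""

# findall returns every match -- the analogue of rex with max_match=0
invoice_ids = re.findall(r"Balance amount is zero for invoice id:(\S+)", LOG)
print(", ".join(invoice_ids))
# -> 20220402-126-12300-A, 20220502-126-12300-B, 20220602-126-12300-C
```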
Hello community,

In our distributed environment we have a few heavy forwarders set up to deal with zone boundaries and whatnot. Silly enough of me, I assumed these would all be configured and humming along, though it turned out that not a single one of them was actually being used. I have looked through the manual, as well as the forum here, though I am still somewhat confused regarding the setup and configuration needed. So I'll take this step by step.

We have a universal forwarder set up on a Linux machine, configured to collect some sys/OS logs and a filewatch for an application log. The UF connects to the deployment server and fetches the configuration; so far so good. Though nothing shows up on the indexers and/or search heads.

First of all, I noticed that "Receive data" on the HF was empty. I assumed there should be a port listed here, so I added the standard port. After this, the server could "curl" connect to the HF, so this seemed like a fantastic start. However, still no logs. The local splunkd log on the UF shows:

07-08-2022 13:36:07.718 +0200 ERROR TcpOutputFd [4105329 TcpOutEloop] - Connection to host=<ip>:<port> failed
07-08-2022 13:36:07.719 +0200 ERROR TcpOutputFd [4105329 TcpOutEloop] - Connection to host=<ip>:<port> failed

So traffic is allowed, though the UF still cannot connect to the HF. From what I can tell from other threads, I also need to have the same apps as deployed on the UF installed on the HF? Or am I misinterpreting this? Could this explain the failed connections?

I have the inputs correct on the UF, and I have outputs.conf pointing at the HF. The HF sends _internal to the indexers, so that seems OK. It is just not accepting connections from the UF. What exactly do I need on the HF so that logs can be "redirected" from the UF to the indexers?
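For reference, a minimal sketch of the UF-to-HF pairing, assuming the standard port 9997 and a placeholder HF address: the HF needs a splunktcp input (this is what populates "Receive data"), and the UF's outputs.conf must point at the same port.

```ini
# On the heavy forwarder -- inputs.conf
[splunktcp://9997]
disabled = 0

# On the universal forwarder -- outputs.conf
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = <hf-ip>:9997
```

The deployed apps themselves do not need to exist on the HF for plain forwarding to work; the splunktcp listener and matching outputs are the essential pair.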
Hi, does anyone know how I can make the columns in all my tables the same width so that they line up? I'm using the transpose command to fill in the header_field.
Hi, I need to switch my Splunk Enterprise SH to the european spacebridge server. Does anybody know the correct URL? Can I just switch by pointing to the other server in securegateway.conf? Many thanks in advance Best Regards Wolfgang
New to cybersecurity; I've been in my first entry-level job for 6 months. New to Splunk too: I took some classes, but they were quick and didn't go into much detail (the instructor basically read the slides).

I ran into an issue, a red warning on 8.2.4:

The percentage of small buckets (100%) created over the last hour is high and exceeded the red thresholds (50%) for index=mail, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=5, small buckets=1

It would then list the last 50 related messages. Early this morning it did... but now it says 'None'. It happens on indexer #7 of the 8 we have.

I've been crawling the web to gain some understanding and found this link: Solved: The percentage of small of buckets is very high an... - Splunk Community

The OP had the same issue, talked about time parsing, and was able to fix it. What is time parsing, and how do I fix it?

I am not great with search strings, regex, etc.; Splunk just kind of fell in my lap. I tried to follow the search string that @jacobpevans wrote up in reply to the post above, though I'm not sure I follow it well. Basically it searches each hot bucket for index=_internal sourcetype=splunkd, lists the hot buckets that are moving to warm, renames index to join on, and the join command then joins each instance to a rollover. I run the search as-is and get a lot of output listing many indexes, but not the index "mail" indicated in the warning. The output also shows 4 rows (2 indexes on 2 different indexers) with "Violation" and 100% small buckets.

I would like to resolve this issue, but I am seriously lost, haha. I think Splunk may be the death of my career even before it gets started.
Hi, some users are seeing the error "Invalid_adhoc_search_level" while running search queries; the issue is intermittent. Has anyone else faced a similar issue? If so, why does it happen and how can it be fixed? Splunk version: 8.2.6
Hello, I am trying, albeit unsuccessfully, to add the multiline configuration to the Helm values file for an SCK install. I am unable to get any custom filter to stick at fresh install or upgrade; each time, the lines that control multiline handling are wiped out. Does anyone have an example of injecting this into values.yaml for a Helm SCK install or upgrade? I'm trying to get the multiline log settings from the repo below to apply at install or upgrade time.

GitHub - splunk/splunk-connect-for-kubernetes: Helm charts associated with kubernetes plug-ins

Thanks
We have a home-grown application that pings Google DNS on a regular basis. We are ingesting the data from our Meraki wireless devices, and I would like to filter out the ICMP messages with a destination of 8.8.8.8. Our events look like this:

7/8/22 8:14:51.427 AM
2022-07-08 07:14:51.427 xxx.xxx.xxx.xxx 1 Location_XXX flows src=xxx.xxx.0.1 dst=8.8.8.8 mac=70:D3:79:XX:XX:XX protocol=icmp type=8 pattern: allow icmp
host = xxx.xx.0.2
source = /syslog0/syslog/meraki/xxx.xx.0.2/messages.log
sourcetype = meraki

What would be the most efficient way to filter these messages to help reduce license usage?
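The standard approach for this is routing matching events to nullQueue at the parsing tier (heavy forwarder or indexer); events dropped there are never indexed, so they don't count against the license. A sketch, assuming the `dst=8.8.8.8` token is stable in the raw event (the transform name is a placeholder):

```ini
# props.conf
[meraki]
TRANSFORMS-drop_gdns = drop_google_dns_ping

# transforms.conf
[drop_google_dns_ping]
REGEX = dst=8\.8\.8\.8\s
DEST_KEY = queue
FORMAT = nullQueue
```

The `\s` after the IP keeps the regex from matching longer addresses that merely start with 8.8.8.8.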
Hello, I would like to be able to create a serverclass based on our inventory, which is indexed in Splunk. The problem with using wildcards is that our server names aren't detailed enough to determine which type of database is running on each one. Example:

vm-db-1 -> MariaDB
vm-db-2 -> PostgreSQL
vm-db-3 -> MariaDB
vm-db-4 -> Oracle

With a Splunk query, I can easily find this information. Is there a solution to my problem? Thanks for your help.
There are logs with contents like:

[{timestamp: xxx, duration: 5, url: "/foo1", status: 200}, {timestamp: xxx, duration: 7, url: "/foo2", status: 200}, {duration: 6, url: "/foo1", status: 200}...]

I'd like to produce throughput and latency stats with sparklines. I can get the avg sparkline; however, if there were a way to get a p50 or p90 sparkline, that would help, since the avg latency sparkline alone might not be informative enough. A sample query:

... earliest=-1d@d latest=@d
| stats
    sparkline(count, 5m) as throughput,
    sparkline(avg(duration), 5m) as latency,
    count as total_requests,
    p50(duration) as duration_p50,
    p90(duration) as duration_p90,
    p99(duration) as duration_p99
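As far as I know, sparkline() only accepts a limited set of aggregations (count, avg, min, max, sum, dc, etc.), not percentiles, so per-bucket percentiles are usually computed with bin plus stats instead (roughly `| bin _time span=5m | stats p90(duration) by _time`) and charted. The bucket-then-percentile step, sketched in Python with a nearest-rank p90 (one of several percentile definitions):

```python
def p90(values):
    """Nearest-rank 90th percentile (one common definition)."""
    ordered = sorted(values)
    idx = max(0, int(0.9 * len(ordered) + 0.5) - 1)
    return ordered[idx]

def bucket_percentiles(events, span=300):
    """Group (timestamp, duration) pairs into `span`-second buckets
    and return {bucket_start: p90 of durations in that bucket}."""
    buckets = {}
    for ts, duration in events:
        buckets.setdefault(ts - ts % span, []).append(duration)
    return {start: p90(vals) for start, vals in sorted(buckets.items())}
```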
Base query:

index=jenkins* teamcenter
| search event_tag=job_event
| search build_url=*TC_Active*
| where isnotnull(job_duration)
| rex field=job_name "(?<app>[^\.]*)\/(?<repo>[^\.]*)\/(?<jobname>[^\.].*)"
| rex field=metadata.GIT_BRANCH_NAME "(?<branch>.*)"
| rex field=user "(?<user>[^\.]*)"
| search app="*" AND repo="*" AND jobname="*" AND branch="*" AND user="*"
| eval string_dur = tostring(round(job_duration), "duration")
| eval formatted_dur = replace(string_dur,"(?:(\d+)\+)?0?(\d+):0?(\d+):0?(\d+)","\1d \2h \3m \4s")
| rename job_started_at AS DateTime app AS Repository branch AS Branch jobname AS JobName job_result AS Job_Result formatted_dur AS Job_Duration "stages{}.name" AS "Stage View" "stages{}.duration" AS Duration
| table DateTime Repository Branch JobName Job_Result Job_Duration "Stage View" Duration

Output (Stage View and Duration are multivalue fields):

Row 1: DateTime=2022-07-07T11:47:39Z, Repository=TeamCenter/TC_Active/TCUA_Builds, Branch=release/ALM_TC15.5, JobName=AMAT_Key_Part_Family_Extraction, Job_Result=SUCCESS, Job_Duration="d 0h 15m 35s", Stage View=[Preparation, Sonar Analysis, Build, Save Artifacts], Duration=[108.817, 419.698, 15.819, 376.698]
Row 2: DateTime=2022-07-07T17:14:49Z, Repository=TeamCenter/TC_Active/Portal, Branch=release/ALM_TC15.5, JobName=com.amat.rac, Job_Result=SUCCESS, Job_Duration="d 0h 25m 49s", Stage View=[Preparation, Sonar Analysis, Build, Save Artifacts], Duration=[105.014, 1309.388, 29.486, 101.647]

I need to add another column, "stage_duration", which converts the "Duration" field values (seconds) into "Day Hr Min Sec" format.
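Since the Duration values are seconds, the new column is the same arithmetic already applied to Job_Duration, applied per multivalue entry (in SPL this would typically be an eval over each value, e.g. with mvmap). A Python sketch of the conversion, assuming each value is plain seconds:

```python
def format_duration(seconds):
    """Render a seconds value as 'Dd Hh Mm Ss', e.g. for a stage_duration column."""
    total = int(round(seconds))
    days, rem = divmod(total, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{days}d {hours}h {minutes}m {secs}s"

# e.g. format_duration(419.698) -> '0d 0h 7m 0s'
```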
host="SPL-SH-DC" sourcetype="ABCSW"......
| search "Plugin Name" != "TLS Version 1.1 Protocol Deprecated" AND Port != "8443" AND Port != "8444"
| table "IP Address",Host_Name,"Plugin Name",Severity,Protocol,Port,Exploit,System_Type,Synopsis,Description,Solution,"See Also","CVSS V2 Base Score",CVE,Plugin,status,Pending_since,source

Hi Splunkers, could you please help? I have the query above; however, I want the filter on "Plugin Name" != "TLS Version 1.1 Protocol Deprecated" to apply only when the "Port" field equals "8443" or "8444". I would appreciate your help.
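The gap between the current search and the intended result is boolean grouping: ANDing three independent inequalities also removes the deprecated-TLS events on other ports. The intended condition is NOT (plugin matches AND port is 8443/8444), which in SPL would look roughly like `NOT ("Plugin Name"="TLS Version 1.1 Protocol Deprecated" AND (Port=8443 OR Port=8444))`. The two predicates, sketched in Python for comparison:

```python
DEPRECATED = "TLS Version 1.1 Protocol Deprecated"

def keep_event(plugin, port):
    """Intended filter: drop only deprecated-TLS findings on 8443/8444."""
    return not (plugin == DEPRECATED and port in (8443, 8444))

def keep_event_too_strict(plugin, port):
    """The original search's logic -- also drops the plugin on other ports."""
    return plugin != DEPRECATED and port != 8443 and port != 8444
```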
Hello Splunkers,

Splunk crashes on our Linux core servers (master, search heads, and heavy forwarders; indexers are not affected) with "ENGINE: Bus STOPPED" errors. When I check splunkd status on a host, it turns out that:

se1234@z1il1234:~> /opt/splunk/bin/splunk status
splunkd 130820 was not running.
Stopping splunk helpers...
Done.
Stopped helpers.
Removing stale pid file... done.

It is an intermittent issue. Today in the morning we had it on SH1, yesterday on SH2, and a week before on SH4. We have checked resource usage, and nothing shown in the DMC (e.g. high CPU usage) would explain splunkd crashing. Any idea what is worth checking or how to troubleshoot?

Greetings,
Dzasta