All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, a customer has a single-spa micro-frontend app (https://single-spa.js.org/docs/getting-started-overview). After installing BRUM, it reports an error; see the screenshot below. Is there any suggestion for this case? Thanks.
I need to get the list of the IPs that have generated the most outgoing traffic. When the query runs, I find that there are multiple records for the same IP. Is there any way to get a total in GB for each IP? Thank you.
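For reference, this is roughly the kind of search I had in mind, although I have not got it working yet; the index, sourcetype, and field names (src_ip, bytes_out) are only placeholders for whatever my data actually uses:

index=network_traffic sourcetype=firewall
| stats sum(bytes_out) AS total_bytes BY src_ip
| eval total_gb=round(total_bytes/1024/1024/1024, 2)
| sort - total_gb
| head 20

The idea being that stats sum() collapses the multiple records per IP into one total, which I can then convert to GB.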
I don't have a ton of experience with Splunk yet, but I've been asked to find API endpoints (which appear to be in our raw data) and see how often they're being used.

Example events:

2022-07-08 05:59:06 21.30.2.80 POST /api/transact/credit/sale 5051 - 571.232.505.62 okhttp/3.18.9
2022-07-08 05:02:01 22.35.3.79 POST /api/transact/device 6062 - 641.141.323.82 okhttp/2.15.3

What I want to end up with is the API endpoint and a count:

/api/transact/credit/sale     3,475
/api/transact/device            275

Is this possible? Thank you!!
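In case it clarifies what I am after, this is a rough sketch of the search I imagine (untested; the index is a placeholder and the regex is just a guess based on the sample events above):

index=web_logs
| rex field=_raw "(?:GET|POST|PUT|DELETE)\s+(?<api_endpoint>/api/\S+)"
| stats count BY api_endpoint
| sort - count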
Splunk Enterprise 8.2.3.3 on Linux.

In our implementation, I'm using a cluster app on our Indexer and Search Head clusters to control LDAP authentication. I have two separate apps (due to different authentication needs on each) but essentially the same basic LDAP configuration. This has been working fine since inception, but we recently had to update the password used to connect to the LDAP server.

I thought it would be a matter of simply updating the password in the 'default' authentication.conf in each of the apps and then deploying an app bundle to each cluster. I assumed that the 'local' authentication.conf, which normally gets created on each node with an encrypted version of the app password, would get updated with a new encrypted password on each of the cluster nodes as part of the bundle push.

The bundle deployments worked fine, but LDAP authentication was not working afterwards. The 'local' authentication.conf did not get updated during the app bundle push to either cluster, and the way I got it working was:

1. Manually remove the app's 'local' authentication.conf from all of the Indexer and Search Head nodes
2. Do a rolling restart of each cluster

After that, the LDAP authentication worked correctly. Is that expected? Is there a better way of doing this? Any issues with my use of 'default' / 'local' for these purposes? Thanks in advance for any thoughts.
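For context, the relevant part of each app looks roughly like this (the strategy name and values are placeholders, and I have left out the userBaseDN/groupBaseDN settings). The plaintext bindDNpassword sits in default/authentication.conf, and it is the encrypted copy of it in local/authentication.conf on each node that did not get refreshed by the bundle push:

# myapp/default/authentication.conf (pushed in the cluster bundle)
[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 636
bindDN = cn=splunk-svc,ou=service,dc=example,dc=com
bindDNpassword = <new plaintext password>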
I have read a lot of different threads and docs but I'm still having trouble pulling what I need out of the JSON below. Essentially I want a condensed list of the vulnerability data, but this JSON nests the vulnerabilities based on the "Package". I would like a table that lists all the applicable vulns for each image.

Table I am trying to get:

Image         Name (CVE)       NVD_Score   Description            etc...
Image_name    CVE-2022-0530    4.3         A flaw was found....

(The JSON example is attached as an image.) I can include raw data if that would help.
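For what it's worth, this is the direction I have been trying, though the index, JSON path, and field names here are only guesses since I can only show the structure as an image:

index=image_scans
| spath path=vulnerabilities{} output=vuln
| mvexpand vuln
| spath input=vuln
| table image_name cve nvd_score description package

The intent is to expand each nested vulnerability object into its own result row and then pull the fields out of it.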
Good day, friends. I have the following issue: a little over a month ago we upgraded Splunk from version 7.0 to 8.1.7.2. I don't know if it is because of the upgrade, but Splunk no longer lets me create users, showing the following error: "In handler 'users': Could not get info for role that does not exist: windows-admin". I also removed the apps Splunk had that were not compatible, among them "Splunk App for Windows Infrastructure"; I don't know if this or the upgrade caused the problem. Has anyone had this happen, and how did you solve it? Thanks.
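If it helps, my guess is that something like the stanza below is left over in an authentication.conf somewhere (the strategy and group names here are only examples), still mapping an LDAP group to the windows-admin role that disappeared with the app:

# authentication.conf
[roleMap_corp_ldap]
admin = Splunk Admins
# this mapping points at a role that no longer exists in authorize.conf:
windows-admin = Windows Server Admins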
Hi All, I'm trying to generate a one-month Dexter report in AppDynamics but am unable to do it. Kindly share the steps for monthly report generation.
I have logs from switches being ingested, but the data doesn't conform to any standard data model. Is this possible or  
Please help me load jquery-ui into a dashboard XML. Also, can I load the jquery-ui CSS inside the require.conf?

In the browser console, I'm getting this error:

JQuery Version: 3.6.0
VM4289:50 Uncaught TypeError: Cannot read properties of undefined (reading 'ui')
    at eval (eval at <anonymous> (dashboard.js:1276:187236), <anonymous>:50:20)
    at Object.execCb (eval at module.exports (dashboard.js:632:662649), <anonymous>:1658:33)
    at Module.check (eval at module.exports (dashboard.js:632:662649), <anonymous>:874:51)
    at Module.eval (eval at module.exports (dashboard.js:632:662649), <anonymous>:1121:34)
    at eval (eval at module.exports (dashboard.js:632:662649), <anonymous>:132:23)
    at eval (eval at module.exports (dashboard.js:632:662649), <anonymous>:1164:21)
    at each (eval at module.exports (dashboard.js:632:662649), <anonymous>:57:31)
    at Module.emit (eval at module.exports (dashboard.js:632:662649), <anonymous>:1163:17)
    at Module.check (eval at module.exports (dashboard.js:632:662649), <anonymous>:925:30)
    at Module.enable (eval at module.exports (dashboard.js:632:662649), <anonymous>:1151:22)

My JavaScript:

require.config({
    waitSeconds: 0,
    paths: {
        'localjquery': '/static/app/myapp/lib/jquery.min',
        'jqueryui': '/static/app/myapp/lib/jquery-ui.min'
    },
    shim: {
        'jqueryui': {
            deps: ['localjquery']
        }
    }
});
require([
    // 'splunkjs/ready!',
    'underscore',
    'backbone',
    'localjquery',
    'splunkjs/mvc',
    'jqueryui',
    'splunkjs/mvc/simplexml/ready!'
], function (_, Backbone, $, mvc) {
    defaultTokenModel = mvc.Components.get("default");
    console.log("JQuery Version:");
    console.log(jQuery().jquery);
    console.log("JQuery-UI Version:");
    console.log($.ui.version);
});

My dashboard:

<dashboard script="input_slider_range.js" stylesheet="lib/jquery-ui.min.css">
  <label>slider range</label>
  <row>
    <panel>
      <html>
        SLIDER
        <p>
          <label for="amount">Price range:</label>
          <input type="text" id="amount" readonly="true" style="border:0; color:#f6931f; font-weight:bold;" />
        </p>
        <div id="slider-range"></div>
      </html>
    </panel>
  </row>
</dashboard>
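One thing I have been wondering about, though I am not sure it is even the right track: since jquery-ui defines itself as an AMD module that depends on a module called "jquery", maybe it is binding to Splunk's bundled jQuery rather than my localjquery. Would a RequireJS map config like this (just a guess on my part) make sense here?

require.config({
    paths: {
        'localjquery': '/static/app/myapp/lib/jquery.min',
        'jqueryui': '/static/app/myapp/lib/jquery-ui.min'
    },
    map: {
        // guess: point jquery-ui's internal "jquery" dependency at the local copy
        'jqueryui': { 'jquery': 'localjquery' }
    }
});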
Hello, my goal is to create a ping test for several cameras we have onsite, and I'm looking for advice on this. We are using software called Genetec for our cameras (not sure if this can be integrated with Splunk, nor whether it is completely necessary).

Details:
- Currently I have Splunk Cloud (3 on-premise hosts in AWS).
- The cameras connect to a physical router at our office, and I have access to that router.
- The cameras have private IPs within a VLAN; I can ping them when connected to our VPN.
- Genetec software is used for our cameras.

Goals:
- Create alerts when the cameras go down, via a ping test.
- Possibly create a dashboard showing each camera's availability (i.e. whether it is on or off).
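Assuming the ping results end up in an index somehow (a scripted input, a monitoring add-on, or something pushed from the router side), the alert search I have in mind would be something along these lines; the index and field names (camera_pings, status, camera_ip) are entirely made up:

index=camera_pings
| stats latest(status) AS status, latest(_time) AS last_seen BY camera_ip
| where status="down" OR last_seen < relative_time(now(), "-15m")

In other words, flag any camera that last reported as down, or that has not reported at all in the last 15 minutes.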
Hello peeps, I currently have a list of processing times, and I am trying to create a dashboard that shows the average time, the max time, and a count of how many times that action is processed.

index=IndexA
| stats avg(cTotal) max(cTotal) count(cTotal) by name
| sort 10 -count(cTotal)

Totaltime is a given header (an example value is in the attached screenshot). When I run the search, I get accurate results for count and max, but the avg is not currently working. I'm thinking it's because the value is a time string (hours, min, sec, millisec), but if anyone has advice on how to make the average work, I would love to hear it!
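In case my description is unclear, this is the variant I have been thinking about: convert the string to seconds first and average that. It assumes the value really is hours:minutes:seconds(.ms), which I cannot show here except as a screenshot:

index=IndexA
| eval parts=split(cTotal, ":")
| eval cTotal_sec=tonumber(mvindex(parts, 0))*3600 + tonumber(mvindex(parts, 1))*60 + tonumber(mvindex(parts, 2))
| stats avg(cTotal_sec) AS avg_sec, max(cTotal_sec) AS max_sec, count BY name
| sort 10 - count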
We are trying to filter out events from a syslog server that is ingesting data from a number of sources; the one we are trying to filter is our Meraki devices. Each Meraki is considered a source and the sourcetype is meraki. This is a sample of the events coming into Splunk:

2022-07-08 07:14:51.427 xxx.xxx.xxx.xxx 1 Location_XXX flows src=xxx.xxx.0.1 dst=8.8.8.8 mac=70:D3:79:XX:XX:XX protocol=icmp type=8 pattern: allow icmp

host = xxx.xx.0.2
source = /syslog0/syslog/meraki/xxx.xx.0.2/messages.log
sourcetype = meraki

There are more than 100 sources, all using the format /syslog0/syslog/meraki/<IP Address>/messages.log. How can I put that source in props.conf without listing each one separately?
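My rough understanding is that a wildcarded source stanza plus a nullQueue transform is the usual approach, something like the sketch below, but I have not tested it; the transform name is mine and the REGEX is only a placeholder for whatever actually identifies the events we want to drop:

# props.conf
[source::/syslog0/syslog/meraki/*/messages.log]
TRANSFORMS-meraki_filter = drop_meraki_noise

# transforms.conf
[drop_meraki_noise]
REGEX = pattern:\s+allow\s+icmp
DEST_KEY = queue
FORMAT = nullQueue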
We recently upgraded our on-prem Splunk instances to version 9.0.0, and now any time we click on one of our home-grown dashboards we get this error: "This dashboard version is missing. Update the dashboard version..." We were on an older version of Splunk 8.x for a while, so I have no idea when this changed.
Hi, I have two event fields with the same name, "timestamp". I just want to display (in stats) the "timestamp" field from the "ResponseReceive" logEventType, not the one from the "SystemLog" logType. Currently it displays both. Is there a way to do this? Any assistance is appreciated. Thank you!!

... | fields timestamp, apiName, apiVersion, ceoCompanyId, entityId, sessionId, transactionDetailsResponse.transactionDetailsList.totalCount, transactionDetailsResponse.transactionDetailsList.transactionDetails{}.acctNumber, transactionDetailsResponse.transactionDetailsList.transactionDetails{}.Amount, transactionDetailsResponse.transactionDetailsList.transactionDetails{}.tranDateTime, transactionDetailsResponse.transactionDetailsList.transactionDetails{}.totalTranCount
| rename transactionDetailsResponse.transactionDetailsList.totalCount AS "TransactionCount", transactionDetailsResponse.transactionDetailsList.transactionDetails{}.acctNumber AS "AcctNum", transactionDetailsResponse.transactionDetailsList.transactionDetails{}.Amount AS "Amount", transactionDetailsResponse.transactionDetailsList.transactionDetails{}.tranDateTime AS "TranDateTime", transactionDetailsResponse.transactionDetailsList.transactionDetails{}.totalTranCount AS "TotalTranCount"
| stats values(timestamp) AS timestamp, values(TranDateTime) AS TranDateTime, values(apiName) AS apiName, values(apiVersion) AS apiVersion, values(ceoCompanyId) AS ceoCompanyId, values(entityId) AS entityId, values(TotalTranCount) AS TotalTranCount, values(AcctNum) AS AcctNum, by sessionId,
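What I was imagining is something like blanking out timestamp on the non-ResponseReceive events before the stats (untested sketch, with the rest of my search unchanged and only the first two stats values shown):

| eval timestamp=if(logEventType="ResponseReceive", timestamp, null())
| stats values(timestamp) AS timestamp, values(TranDateTime) AS TranDateTime BY sessionId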
Hi All, I have this report (see the screenshot). My requirement is to show in the table only those events that do NOT have Plugin Name = "TLS Version 1.1 Protocol Deprecated" with Port = 8443 or 8444 (the rows I have highlighted in yellow), while still showing the events that have Plugin Name = "TLS Version 1.1 Protocol Deprecated" on ports other than 8443 or 8444.

With the search I am using (also in the screenshot), the result shows only Plugin Name = "TLS Version 1.1 Protocol Deprecated" on ports other than 8443 or 8444. I want a result that shows all Plugin Names but excludes Plugin Name = "TLS Version 1.1 Protocol Deprecated" when Port = 8443 or 8444. Any suggestions?
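To be explicit, the condition I am trying to express is essentially the clause below, tacked onto the end of my existing search (field names written as they appear in the table; untested):

| where NOT ('Plugin Name'="TLS Version 1.1 Protocol Deprecated" AND (Port=8443 OR Port=8444))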
In the logs there are multiple lines printed like the ones below, and I want to print all of them in a table, but my search query only prints one value. I need help printing multiple records.

Balance amount is zero for invoice id:20220402-126-12300-A
Balance amount is zero for invoice id:20220502-126-12300-B
Balance amount is zero for invoice id:20220602-126-12300-C

I need to print something like: 20220704-126-77300-A, 20220404-126-77300-A, 20220704-126-77300-A

The query I am trying:

rex field=_raw "Balance amount is zero for invoice id:(?P<InvoiceExceptionNo>\S+)"
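From what I can gather I probably need max_match and mvexpand, so something like the sketch below is what I have been trying to get to (the index and the base search terms are placeholders):

index=app_logs "Balance amount is zero for invoice id:"
| rex field=_raw max_match=0 "Balance amount is zero for invoice id:(?P<InvoiceExceptionNo>\S+)"
| mvexpand InvoiceExceptionNo
| table _time InvoiceExceptionNo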
Hello community, in our distributed environment we have a few heavy forwarders set up to deal with zone boundaries and whatnot. Silly me, I assumed these would all be configured and humming along, but it turned out that not a single one of them was actually being used. I have looked through the manual, as well as the forum here, but I am still somewhat confused regarding the setup and configuration needed. So I'll take this step by step.

We have a universal forwarder set up on a Linux machine, set to collect some sys/OS logs and a file watch for an application log. The UF connects to the deployment server and fetches the configuration, so far so good. But nothing shows up on the indexers and/or search heads.

First of all, I noticed that "Receive data" on the HF was empty. I assume there should be a port listed here, so I added the standard port. After this, the server could "curl" connect to the HF, so this seemed like a fantastic start. However, still no log. The local splunkd log on the UF shows:

07-08-2022 13:36:07.718 +0200 ERROR TcpOutputFd [4105329 TcpOutEloop] - Connection to host=<ip>:<port> failed
07-08-2022 13:36:07.719 +0200 ERROR TcpOutputFd [4105329 TcpOutEloop] - Connection to host=<ip>:<port> failed

So traffic is allowed, yet the UF still cannot connect to the HF. From what I can tell from other threads, I also need to have the same apps that are deployed on the UF installed on the HF? Or am I misinterpreting this? Could this explain the failed connections?

I have the inputs correct on the UF, and I have outputs.conf pointing at the HF. The HF sends _internal to the indexers, so that seems OK. It is just not accepting connections from the UF. What exactly do I need to have on the HF so that logs can be "redirected" from the UF to the IX?
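For completeness, this is my understanding of the minimal pieces involved on each side; the port and group name are examples, not what we actually run, so please correct me if my mental model is off:

# On the heavy forwarder: inputs.conf (this is what "Receive data" writes)
[splunktcp://9997]
disabled = 0

# On the universal forwarder: outputs.conf (deployed from the deployment server)
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = <hf-ip>:9997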
Hi, does anyone know how I can make the columns in all my tables the same width so that they line up? I'm using the transpose command to fill in the header_field.
Hi, I need to switch my Splunk Enterprise SH to the European Spacebridge server. Does anybody know the correct URL? Can I just switch by pointing to the other server in securegateway.conf? Many thanks in advance. Best regards, Wolfgang
New to cybersecurity, been in my first entry-level job for 6 months. New to Splunk too; I took some classes, but they were quick and didn't cover a whole lot (the Splunk instructor basically read the slides).

I ran into an issue, a red warning on 8.2.4:

The percentage of small buckets (100%) created over the last hour is high and exceeded the red thresholds (50%) for index=mail, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=5, small buckets=1

It would then list the last 50 related messages; early this morning it did, but now it says 'None'. It happens on indexer #7 of the 8 we have.

I've been crawling the web to gain some understanding and found this link: Solved: The percentage of small of buckets is very high an... - Splunk Community. The OP had the same issue but talked about time parsing and was able to fix it. What is time parsing, and how do I fix it?

I am not great with search strings, regex, etc.; Splunk just kind of fell in my lap. I tried to follow the search string that @jacobpevans wrote up in reply to the post above, and I'm not sure I follow it well. Basically it searches index=_internal sourcetype=splunkd for each hot bucket, lists the hot buckets that are rolling to warm, renames the index field to join on, and then the join command ties each instance to a rollover. I run his search as-is and get a lot of results listing many indexes, but not index=mail as indicated in the warning. The output also shows 4 rows (2 indexes on 2 different indexers) with a violation and 100% small buckets.

I would like to resolve this issue, but I am seriously lost, haha. I think Splunk may be the death of my career even before it gets started.
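In case it helps anyone point me in the right direction, this is the kind of search I have been poking at to watch the buckets as they roll; it is my own rough adaptation, so the field names (idx, bid, size) may well be off:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm" idx=mail
| eval size_mb=round(size/1024/1024, 1)
| table _time host idx bid size_mb
| sort - _time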