All Topics

I need to use Dashboard Studio to ensure the PDF looks well formatted. I am new to Studio, but was able to do everything except add a report. I need the ability to append a report as the last panel, so that if the report is lengthy, it will continue across multiple pages. When I add the report, it attempts to render, but it just makes the page incredibly long, not paginated as it should be. My goal is to have several graphic panels plus a lengthy report that will be exported every night to a 3-4 page PDF so the recipient can review it.
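For reference, this is roughly how a saved report gets attached as a table panel in a Studio definition; a sketch, where my_nightly_report is a placeholder report name (whether the resulting table paginates cleanly in the PDF export is exactly the open question):

    "dataSources": {
        "ds_report": {
            "type": "ds.savedSearch",
            "options": { "ref": "my_nightly_report" }
        }
    },
    "visualizations": {
        "viz_report": {
            "type": "splunk.table",
            "dataSources": { "primary": "ds_report" },
            "title": "Nightly Report"
        }
    }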
Anyone know if there is a schema I can load into my IDE so that, when I modify a dashboard JSON definition, I can detect errors and invalid key-value pairs?
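If a schema file can be obtained or hand-written (I am not aware of an officially published one, so dashboard-studio-schema.json below is a hypothetical file), VS Code can wire it up to your dashboard definitions via its json.schemas setting:

    // .vscode/settings.json
    {
        "json.schemas": [
            {
                "fileMatch": ["*dashboard*.json"],
                "url": "./dashboard-studio-schema.json"
            }
        ]
    }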
Code is easier to explain: I wanted a bunch of new categories and I found eval especially useful. Here is an obfuscated example:

    index=my_index CONNECTED source="/var/log/vmware/my_log.log"
    | eval vdi_pool=case(
        match(name,"1A-VDI\d{3}"), "pool1",
        match(name,"1B-VDI\d{3}"), "pool2",
        match(name,"2A-VDI\d{3}"), "pool3",
        match(name,"2B-VDI\d{3}"), "pool4",
        match(name,"3A-VDI\d{3}"), "pool5",
        match(name,"3B-VDI\d{3}"), "pool6",
        1=1, "unclassified")
    | timechart span=1h count by vdi_pool

This made the subsequent queries super easy. Irritatingly, within the dashboard, if I add a new value I need to update all of the queries; this vexes me greatly. I have noticed the entire definition can be downloaded as a JSON doc, so I'm tempted to start templating this in Python, but that does not seem sane. Ideally I'd like to create blocks of repeatable logic I can assemble together to show different scenarios. Has anyone done anything similar to achieve this kind of capability, but more "splunkonic"?
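A more "splunkonic" way to avoid updating every query is a search macro, so the case() logic lives in one place; a minimal sketch, assuming an app-level macros.conf (the macro name vdi_pool_classify is a placeholder):

    # macros.conf
    [vdi_pool_classify]
    definition = eval vdi_pool=case(\
        match(name,"1A-VDI\d{3}"), "pool1",\
        match(name,"1B-VDI\d{3}"), "pool2",\
        1=1, "unclassified")

Each panel query then shrinks to:

    index=my_index CONNECTED source="/var/log/vmware/my_log.log" `vdi_pool_classify` | timechart span=1h count by vdi_pool

Adding a new pool means editing only the macro. A lookup table with wildcard match rules is another common option.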
Howdy Splunkers! WARNING: NEWBIE. I was after a preset drop-down, and I'm after something more 'splunkonic' in Dashboard Studio. Classic dashboards seem to come with the convenience of the relative time presets and so on: really useful. Dashboard Studio is quite nice, and after some head scratching I wanted feature parity. So I want to either completely substitute the queryParameters dictionary with a bunch of drop-downs, or create an object like this:

    "queryParameters": {
        "earliest": "$query.earliest_time$",
        "latest": "$query.latest_time$"
    }

If I were using Python I'd create a class or just insert a dict; how is everyone else managing? Currently I have an earliest_time input and a latest_time input drop-down and substitute the values of the keys above:

    "queryParameters": {
        "earliest": "$earliest_time$",
        "latest": "$latest_time$"
    }
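For anyone after the same parity: Dashboard Studio has a native time range input whose token exposes .earliest and .latest sub-tokens, which replaces the hand-rolled drop-downs; a minimal sketch (the token name tr is arbitrary):

    "inputs": {
        "input_time": {
            "type": "input.timerange",
            "title": "Time Range",
            "options": { "token": "tr", "defaultValue": "-24h@h,now" }
        }
    },
    "dataSources": {
        "ds_main": {
            "type": "ds.search",
            "options": {
                "query": "index=my_index | timechart count",
                "queryParameters": { "earliest": "$tr.earliest$", "latest": "$tr.latest$" }
            }
        }
    }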
Hi all. If my understanding is correct, data will roll from hot to warm after 90 days. I checked indexes.conf and it mentions 90 days. My concerns:
1. For certain indexes I can only see 56 days of data, not 90 days.
2. A device in one index last reported on 30th of April; now if I search with a time frame of All Time, I get no matches and no data from that device.
Can anyone guide me on why there is a deviation in the rolling of data from hot to warm?
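For context: rolling from hot to warm only moves buckets on disk and does not remove anything from search. Data drops out of search when buckets freeze, which can happen before the time-based limit if a size cap is hit first. Illustrative indexes.conf settings:

    # indexes.conf (illustrative values)
    [my_index]
    # buckets are frozen (deleted by default) after 90 days
    frozenTimePeriodInSecs = 7776000
    # if the index hits this size cap first, the oldest buckets
    # freeze early, e.g. leaving only 56 days searchable
    maxTotalDataSizeMB = 500000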
Hello, I'm trying to install DB Connect on my server. After a reboot I tried to start it, but I get this message:

    Missing or malformed messages.conf stanza for MOD_INPUT:INIT_FAILURE__server_the app "splunk_app_db_connect"_Introspecting scheme=server: script running failed (exited with code 1).

What do I need to do? The server itself is working and running. Thanks.
Hello, I have a running Splunk server and I want to add a MySQL add-on to it. I have added the Splunk Add-on for MySQL and I can see it in the Apps list (Splunk Add-on for MySQL, Splunk_TA_mysql, version 3.0.0, Global, Enabled) and also on the home page, but I can't configure it. Also, when I click "MySQL" on the home page I get this error:

    Oops. Page not found! Click here to return to Splunk homepage.

I have tried rebooting the server; it didn't help. What am I missing?
Hello, I set up a cluster behind an AWS NLB. About a month back, the UI suddenly started taking too long to load. I tried tweaking the web.conf cache size, with no help. The page loads and even Splunk's default common.js loads with a 200, which rules out the firewall, and infra usage on the search heads is in single digits (no users, no load). Any ideas, folks? Network trace attached.
Hi all, I am currently configuring Splunk Enterprise Security for alerts and have a doubt about the implementation. I created an alert for failed logins from Windows devices. When this alert triggers, the team runs some queries manually to collect more details, such as the pattern of that user account over the past 30 days, or the servers the account has logged in to over the past 30 days, in order to establish a baseline or investigate whether there is any anomaly in the usage of that account. My doubt is whether there is any way to automate this process: if the above alert triggers, can the queries the team currently runs manually be automated, with the results shown somewhere in the alert itself, in tabular or graphical format, where the team can go and see them? Could you please suggest a solution? Any input on how to set up Enterprise Security for SOC detection and workflow is much appreciated.
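One way this is commonly automated in ES is a drilldown search on the correlation search, so each notable event links to a pre-built 30-day investigation query; a sketch, where the stanza name and the user/index fields are assumptions about your environment:

    # savedsearches.conf on the ES search head (sketch)
    [Failed Logins - Windows Devices]
    action.notable = 1
    action.notable.param.drilldown_name = 30-day activity for $user$
    action.notable.param.drilldown_search = index=wineventlog user="$user$" earliest=-30d@d | timechart count by host

For fully automated enrichment (results computed at trigger time rather than on click), ES adaptive response actions are the heavier-weight mechanism to look at.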
Hi, I know this is a hot topic and there are answers everywhere, but I couldn't figure out by myself what to do. Suddenly the join stopped working and my search is not performing as expected anymore; nobody from infra gave me a reasonable explanation for that, so I have to figure out a different way.

Original search:

    index=aws-prd-01 application.name=domestic-batch context=BATCH action=SEND_EMAIL (status=STARTED OR status="NOT RUN")
    | rename status as initialStatus
    | fields jobId initialStatus
    | join type=left jobId
        [search index=aws-prd-01 application.name=domestic-batch context=BATCH action=SEND_EMAIL (status=COMPLETED OR status=FAILED)
        | rename status as finalStatus
        | fields jobId finalStatus]
    | table jobId initialStatus finalStatus
    | sort -timestamp

Original result:

    jobId  initialStatus  finalStatus
    01     STARTED        COMPLETED
    02     STARTED        FAILED

First search, with no changes:

    index=aws-prd-01 application.name=domestic-batch context=BATCH action=SEND_EMAIL (status=STARTED OR status="NOT RUN")
    | table jobId, status

Result:

    jobId  status
    01     STARTED
    02     STARTED

Second search, with no changes:

    index=aws-prd-01 application.name=domestic-batch context=BATCH action=SEND_EMAIL (status=COMPLETED OR status=FAILED)
    | table jobId, status

Result:

    jobId  status
    01     COMPLETED
    02     FAILED

Thanks a lot.
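For reference, join relies on a subsearch, and subsearches are silently truncated at row and time limits, which is the usual reason a working join "suddenly" breaks as data volume grows. A join-free rewrite of the same logic with stats, as a sketch:

    index=aws-prd-01 application.name=domestic-batch context=BATCH action=SEND_EMAIL
        (status=STARTED OR status="NOT RUN" OR status=COMPLETED OR status=FAILED)
    | eval initialStatus=if(status="STARTED" OR status="NOT RUN", status, null())
    | eval finalStatus=if(status="COMPLETED" OR status="FAILED", status, null())
    | stats latest(_time) as _time, values(initialStatus) as initialStatus, values(finalStatus) as finalStatus by jobId
    | sort - _time
    | table jobId initialStatus finalStatus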
Hello, I have a field uptime in seconds, e.g. 1231456. Can someone help me with the eval expression to convert this to HH:MM:SS as a new field in a table? Thanks in advance.
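A minimal sketch, assuming the field is literally named uptime: tostring(X, "duration") renders seconds as HH:MM:SS (with a day prefix, e.g. 14+06:04:16, once the value exceeds 24 hours). The host field is only there as a table example:

    ... | eval uptime_hms=tostring(uptime, "duration")
    | table host uptime uptime_hms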
Splunk DB Connect 3.9 won't accept the Java runtime "jdk-18_linux-x64_bin.tar.gz". When DB Connect prompts me to input the JRE installation path in Configuration > Settings, it throws this error: "Need Oracle Corporation JRE version 1.8 or OpenSDK 1.8". I can't find the link that will take me to download this JRE 1.8 version. Can someone point me in the right direction to get the right JRE version downloaded? Thanks in anticipation.
Hello, a customer has a single-spa micro-frontend app (https://single-spa.js.org/docs/getting-started-overview). After installing BRUM (Browser Real User Monitoring), it reports an error; see the screenshot below. Is there any suggestion for this case? Thanks.
I need to get the list of the IPs that have generated the most outgoing traffic. When the query runs, I find that there are multiple records for the same IP. Is there any way to get a total in GB for each IP? Thank you.
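A minimal sketch, assuming the fields are named src_ip and bytes_out (substitute whatever your sourcetype actually uses):

    ... | stats sum(bytes_out) as total_bytes by src_ip
    | eval total_GB=round(total_bytes/1024/1024/1024, 2)
    | sort - total_GB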
I don't have a ton of experience with Splunk yet, but I've been asked to find API endpoints (which appear in our raw data) and see how often they're being used.

Example events:

    2022-07-08 05:59:06 21.30.2.80 POST /api/transact/credit/sale 5051 - 571.232.505.62 okhttp/3.18.9
    2022-07-08 05:02:01 22.35.3.79 POST /api/transact/device 6062 - 641.141.323.82 okhttp/2.15.3

What I want to end up with is the API endpoint and a count:

    /api/transact/credit/sale    3,475
    /api/transact/device           275

Is this possible? Thank you!!
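It is; one approach is to extract the path with rex and count by it (the regex assumes the endpoint always follows the HTTP method, as in the samples above):

    ... | rex field=_raw "(?:GET|POST|PUT|DELETE|PATCH)\s+(?<endpoint>/api/\S+)"
    | stats count by endpoint
    | sort - count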
Splunk Enterprise 8.2.3.3 on Linux. In our implementation, I'm using a cluster app on our indexer and search head clusters to control LDAP authentication. I have two separate apps (due to different authentication needs on each) but essentially the same basic LDAP configuration. This has been working fine since inception, but we recently had to update the password used to connect to the LDAP server. I thought it would be a matter of simply updating the password in the 'default' authentication.conf in each of the apps and then deploying an app bundle to each cluster. I assumed that the 'local' authentication.conf, which normally gets created on each node with an encrypted version of the password, would get updated with a new encrypted password on each of the cluster nodes as part of the bundle push. The bundle deployments worked fine, but LDAP authentication was not working afterwards. The 'local' authentication.conf did not get updated during the app bundle push to either cluster, and the way I got it working was:
1. Manually remove the app's 'local' authentication.conf from all of the indexer and search head nodes.
2. Do a rolling restart of each cluster.
After that, LDAP authentication worked correctly. Is that expected? Is there a better way of doing this? Any issues with my use of 'default' / 'local' for these purposes? Thanks in advance for any thoughts.
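For reference, the moving parts look roughly like this: Splunk reads the cleartext bindDNpassword from 'default' at startup and writes an encrypted copy into 'local', and since 'local' takes precedence over 'default', a stale 'local' copy will keep masking a new 'default' password. A sketch, with placeholder names:

    # <cluster_app>/default/authentication.conf (sketch)
    [myLDAP]
    host = ldap.example.com
    bindDN = cn=splunk-svc,ou=service,dc=example,dc=com
    # cleartext here; on restart Splunk writes an encrypted
    # version of this value into local/authentication.conf
    bindDNpassword = <new password>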
I have read a lot of different threads and docs but am still having trouble pulling what I need out of the JSON below. Essentially I want a condensed list of the vulnerability data, but this JSON nests the vulnerabilities based on the "Package". I would like a table that lists all the applicable vulns for each image. The table I am trying to get:

    Image Name  CVE            NVD_Score  Description       etc...
    Image_name  CVE-2022-0530  4.3        A flaw was found....

(Screenshot of the JSON example attached; I can include raw data if that would help.)
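Without the raw JSON the exact paths are guesses, but the usual flattening pattern is spath plus mvexpand (vulnerabilities{}, image_name, and the column names below are hypothetical):

    ... | spath path=vulnerabilities{} output=vuln
    | mvexpand vuln
    | spath input=vuln
    | table image_name CVE NVD_Score Description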
Good day, friends. I'll lay out the following issue: a little over a month ago we upgraded Splunk from version 7.0 to 8.1.7.2. I do not know if it is because of the upgrade, but Splunk no longer lets me create users, raising the following error: "In handler 'users': Could not get info for role that does not exist: windows-admin". I also removed the apps Splunk had that are not compatible, among them "Splunk App for Windows Infrastructure"; I don't know if this or the upgrade generated the problem. Has anyone had this happen, and how did you solve it? Thanks.
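The error typically means an existing user or role still imports a role whose defining app was removed; one hedged workaround is to recreate a stub of the missing role so the reference resolves again (sketch):

    # $SPLUNK_HOME/etc/system/local/authorize.conf (sketch)
    [role_windows-admin]
    importRoles = user

Alternatively, edit the affected users and roles to drop the windows-admin reference entirely.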
Hi all, I'm trying to generate a one-month Dexter report in AppDynamics but am unable to do it. Kindly share the steps for monthly report generation.
I have logs from switches being ingested, but the data doesn't conform to any standard data model. Is this possible or