All Topics


Hi all, I am integrating a Splunk form/dashboard with SOAR, where I use "sendtophantom" to create a container that a playbook needs to run against. However, I am noticing that when the container has multiple artifacts, the playbook takes all the artifacts' CEF fields and combines them into one, which causes havoc in my playbooks. I have considered changing the ingest settings to send multivalue fields as a list instead of creating new artifacts, but that would break too many other playbooks, so it isn't an option right now. My flow is basically as follows:

- A container gets created with information coming from Splunk; the artifact(s) contain subject and sender email information.
- The playbook needs to run through each artifact to get the subject and sender info.
- The playbook processes these values.

Is there a way to specify that a playbook must run against each artifact in a container individually, or another way to alter the datapaths in the VPE to run through each artifact?
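For illustration, the per-artifact behaviour described above looks like this in plain Python. This is only a sketch of the desired logic, not SOAR's actual datapath API; the artifact shapes and CEF field names (emailSubject, fromEmail) are hypothetical examples.

```python
# Hypothetical container payload: one CEF dict per artifact,
# mirroring the subject/sender fields described in the post
artifacts = [
    {"cef": {"emailSubject": "Invoice overdue", "fromEmail": "alice@example.com"}},
    {"cef": {"emailSubject": "Password reset", "fromEmail": "bob@example.com"}},
]

# Desired behaviour: handle each artifact's fields individually,
# rather than one merged set of CEF values across the container
pairs = [(a["cef"]["emailSubject"], a["cef"]["fromEmail"]) for a in artifacts]
print(pairs)
```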
We are currently on Splunk Enterprise Security (ES) 7.3.2, where users who used to be part of the organisation but are now deleted/disabled (in Splunk) still populate when I try to assign new investigations to current members of the organisation. For instance: Incident Review -> Notable -> Create Investigation. In the investigation panel, when I try to assign the investigation to other members of the team, I can also see disabled/deleted accounts/users as assignment options. Is there any way to stop these members from populating, so that the list of investigators reflects the current team?
Hello, why can't I see the export CSV option on my Dashboard Studio dashboard? Our Splunk version is 9.2.1. Thanks.
Can I ask a question about Splunk? I am using the feature that allows me to embed report jobs into HTML using an iframe. However, even though I have 140 job results in Splunk, only 20 are displayed in the embedded HTML. Does anyone know how to solve this issue?
Missing indexes: does anyone have a way to investigate what causes indexes to suddenly disappear? Running btool and listing indexes, my primary indexes with all my security logs are just not there. I also have an NFS mount for archival, and the logs are missing from there too. Going to the /opt/splunk/var/lib/splunk directory, I see the last hot bucket was collected around 9 am. I am trying to parse through whatever logs I can to find out what happened and how to recover.
On Splunk Enterprise 9.2 and DBConnect 3.17.2: I'm in the process of replacing our old Splunk instance, and with the new version of DBConnect I seem to be unable to disable SSL encryption on the connection to the database. It's a Microsoft MS-SQL database, and I connect using the generic MS SQL driver. I do not have "Enable SSL" checked, and I have encrypt=false in the JDBC URL:

jdbc:sqlserver://phmcmdb01:1433;databaseName=CM_PHE;selectMethod=cursor;encrypt=false

And yet it cannot connect, throwing the error:

"encrypt" property is set to "false" and "trustServerCertificate" property is set to "false" but the driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption: Error: SQL Server did not return a response.

The old system, running DBConnect 3.1.4 on Splunk Enterprise 7.3.2, can connect just fine without SSL enabled. Why is DBConnect insisting on attempting an SSL connection? The SQL server is obviously not requiring it, or the old server would not work. Or is this a false error message diverting me from some other problem?
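One detail worth noting: even with encrypt=false, the Microsoft JDBC driver still performs a TLS handshake to protect the login sequence, so this error can surface when the driver and an older SQL Server cannot agree on a TLS version. A connection string that additionally sets trustServerCertificate=true (a real driver property, but whether it resolves this particular failure is an assumption to test) would look like:

```
jdbc:sqlserver://phmcmdb01:1433;databaseName=CM_PHE;selectMethod=cursor;encrypt=false;trustServerCertificate=true
```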
Hi Team, could you please advise why the below query is not showing any data: `secrpt-active-users($select321$)` Thanks
Hi! The log in question reads as: HTTP/1.1" 200 365 3. In our Splunk we don't have an "HTTP status" field to pivot off of. The HTTP response always shows as it does above, so I'd need a regex that gives me something like: | rex field=HTTP response "   HTTP/1.1" ***
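A regex for this case can anchor on the literal `HTTP/1.1"` and capture the three-digit status code that follows it. A minimal sketch, tested in Python against the sample line from the post (the field name `status` is my choice):

```python
import re

# Sample log fragment from the post
line = 'HTTP/1.1" 200 365 3'

# Capture the 3-digit status code that follows HTTP/1.1"
m = re.search(r'HTTP/1\.1"\s+(?P<status>\d{3})\b', line)
print(m.group("status"))
```

The same pattern should carry over to SPL as something like `| rex field=_raw "HTTP/1\.1\"\s+(?<status>\d{3})"`, though the exact field to run `rex` against depends on how the events are extracted.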
Hi, I was wondering if anyone knew how I can find the custom source types created by a Data Manager input? I had configured a custom source type for CloudWatch logs but can't seem to find it under the Source Types UI. Is this abstracted away somehow? How can I take a look at how it was configured under the hood? Thanks
Hi, I was wondering if someone could give me a straightforward breakdown of how to link dropdown inputs to different panels using tokens. Regards,
This is the second blog in our Splunk Love series. Check out our first one: "Describe Splunk in One Word"! What excites our customers and partners about Splunk? This is a question we are very curious about, so we brought it to the Splunk Love video booth at .conf24. Check them out below!

Cisco Integration: Unlocking new possibilities
The Splunk Community is excited about the integration of Splunk with Cisco. Folks are looking forward to seeing product integrations come to life and to how new tools like AppDynamics can enhance their capabilities.

Splunk Product Capabilities: Driving the surprise!
Splunk's flexibility in data ingestion allows users to gain a comprehensive understanding of their environment and operations, adapting to various data sources seamlessly. Splunk SOAR's automation capabilities received a standout mention, having revolutionized the way users handle incident response. Other participants also mentioned how they saw valuable outcomes within the first couple of months of implementing Splunk Observability. Splunk products continually push the boundaries of what is possible, turning challenges into successes and creating great 'data hero' moments for our users.

Splunk End-to-End Support: A Continuous Journey
Many customers also cited Splunk's comprehensive end-to-end support as a key factor in their Aha! moments. From the initial onboarding process, guided by the Customer Success team, to ongoing education and community engagement, users feel supported at every step.

These recorded stories (and more!) underscore the impact of Splunk's value in driving success and fostering growth for our users. Check here for more Aha! moments from our participants.

Thank You!
We're grateful to everyone who shared their excitement with us. These videos not only highlight the diverse applications and benefits of our solutions but also inspire us to continue innovating and supporting our community.
Join the Conversation
If you have further feedback and suggestions, please visit Splunk VOC to share your voice and ideas, and to join customer advisory boards and product preview programs. Your feedback is invaluable to us as we strive to provide the best experience for everyone.

Cheers,
Team Splunk
So I've run into a weird issue where almost all of my apps show up as a web, and you can see where calls from one app are made to another app. All except one. In this one, connections to other apps just show up under the "Remote Services" page with the FQDN listed, so the dashboard view doesn't correctly link them. Is there a way to say that a specific remote service is actually connected to another app? All the documentation I've found tells you how to rename it to a tier within the same app.
Hello, we are trying to get the OS version (e.g. RHEL 6, Ubuntu 6.x) from the Splunk add-on for Linux. We have enabled the version.sh script and are trying to see how to get this info; currently I am only getting os_name as "Linux". Is it possible to get additional info like RHEL or Ubuntu? Please help me out. Thanks
Hello, can someone help me with a Splunk search to see whether IPv6 is enabled on target machines? Thanks
I have around 10 alerts set up to notify Slack, and I'm trying to find the total number of times each alert triggered in the previous month. I'm using the following:

index="_internal" sourcetype="scheduler" thread_id="AlertNotifier*" NOT (alert_actions="summary_index" OR alert_actions="")
| search savedsearch_name IN.....
| stats count by savedsearch_name
| sort -count

This works and brings up figures for all 10 alerts; however, for some reason it doesn't seem to be accurate. For example, I know we receive multiple alerts in a day for one particular search (which is set to fire every 15 minutes), so a count of 23 for the previous month just isn't correct. What am I doing wrong?

PS: I'm a complete newbie here. Thanks in advance!
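One way to narrow down a discrepancy like this is to break the count out per day, so you can see which days the base search is undercounting. A sketch using the same scheduler fields as the query above (the savedsearch_name list is left as a placeholder):

```
index="_internal" sourcetype="scheduler" thread_id="AlertNotifier*"
    NOT (alert_actions="summary_index" OR alert_actions="")
| search savedsearch_name IN (...)
| timechart span=1d count by savedsearch_name
```

If days you know had alert firings show zero here, the gap is more likely in the filters or in _internal index retention than in the final stats step.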
Hi, now and again we get an extremely high system load average on the search head. I can't figure out why it is happening, and I have to do a kill -9 -1 and restart to fix it. This means we can't log into the Splunk GUI. When I kill Splunk I see a lot of processes; after it is dead, I can still see a splunkd process on the box, and the load is still high. Regards, Robert
Hi All, we have JSON logs where a few events are not parsing properly. When I check the internal logs, they show that the events exceed the default truncate value of 10000 bytes, so I tried increasing the truncate value to 40000, but the logs are still not parsing correctly. The event length is around 26000. Props used:

[app:json:logs]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
CHARSET=UTF-8
TIMEPREFIX=\{\"timestamp"\:\"
KV_MODE=json
TRUNCATE=40000
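For comparison, a props.conf stanza along these lines is a common starting point for single-line JSON events. This is a sketch, assuming one JSON object per line with a leading "timestamp" key; note that TIME_PREFIX (with an underscore) is the documented spelling of the attribute, and SHOULD_LINEMERGE is normally false when LINE_BREAKER handles event breaking:

```
[app:json:logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
CHARSET = UTF-8
TIME_PREFIX = \{"timestamp":"
KV_MODE = json
TRUNCATE = 40000
```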
Hello, I have this:

results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.JSONResultsReader(oneshotsearch_results)

dict = json.loads(oneshotsearch_results)  # to get a dict, to send data outside Splunk selectively

Error: TypeError: the JSON object must be str, bytes or bytearray, not ResponseReader

How do I fix this? Thanks
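The TypeError says json.loads was handed the stream object itself rather than its contents: splunklib's ResponseReader is file-like, so you either read it first or iterate splunklib.results.JSONResultsReader over it. A minimal stdlib sketch of the read-then-parse idea, with io.BytesIO standing in for the ResponseReader (the sample payload is made up):

```python
import io
import json

# Stand-in for the file-like ResponseReader returned by service.jobs.oneshot(...)
fake_response = io.BytesIO(b'{"results": [{"host": "web01", "count": "7"}]}')

# json.loads needs str/bytes, so read the stream's contents first
payload = json.loads(fake_response.read())
rows = payload["results"]
print(rows)
```

Also note a stream can only be consumed once: if JSONResultsReader has already read the response, a later .read() on the same object returns nothing, so pick one approach.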
We have configured a health rule in AppDynamics to monitor storage usage across all servers (Hardware Resources|Volumes|/|Used (%)). The rule is set to trigger a Slack notification when root storage exceeds the 80% warning and 90% critical thresholds. The rule violation is correctly detected for all nodes, and two of the VMs are above 90%, but alerts are sent for only one VM. We need assistance in ensuring that alerts are triggered and sent for all affected nodes. Please also see the attached screenshots.
Here is an old post from 2019 that was unanswered: https://community.splunk.com/t5/Deployment-Architecture/Remove-missing-duplicate-forwarders-from-forwarder-managment/m-p/492211 I am running into the same issue on Splunk Enterprise 9.2.2. Basically, we had maybe 400+ machines on version 9.0.10. After upgrading them to a newer splunkforwarder, 9.2.2, there are duplicate instances of the computers under Forwarder Management, pushing our client count above 800. How can you remove the duplicates without going through each duplicate and clicking Delete Record? Thanks