All Topics


On Splunk Enterprise 9.2 and DBConnect 3.17.2: I'm in the process of replacing our old Splunk instance, and with the new version of DBConnect I seem to be unable to disable SSL encryption on the connection to the database. It's a Microsoft MS-SQL database, and I connect using the generic MS SQL driver. I do not have "Enable SSL" checked, and I have encrypt=false in the JDBC URL:

jdbc:sqlserver://phmcmdb01:1433;databaseName=CM_PHE;selectMethod=cursor;encrypt=false

and yet it cannot connect, throwing the error:

"encrypt" property is set to "false" and "trustServerCertificate" property is set to "false" but the driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption: Error: SQL Server did not return a response.

The old system, running DBConnect 3.1.4 on Splunk Enterprise 7.3.2, can connect just fine without SSL enabled. Why is DBConnect insisting on attempting an SSL connection? The SQL Server is obviously not requiring it, or the old server would not work. Or is this a false error message diverting me from some other problem?
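For what it's worth, even with encrypt=false the Microsoft JDBC driver still performs a TLS handshake to protect the login sequence, so this error can appear when that handshake fails. A common culprit is a TLS version mismatch between the newer Java/driver stack on the new server and an older SQL Server that only supports TLS 1.0. A sketch to test that theory (sslProtocol is a documented driver property; the value shown is an assumption about what your server supports):

jdbc:sqlserver://phmcmdb01:1433;databaseName=CM_PHE;selectMethod=cursor;encrypt=false;trustServerCertificate=true;sslProtocol=TLSv1

If that connects, the longer-term fix is enabling TLS 1.2 on the SQL Server side rather than downgrading the protocol on the client.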
Hi Team, could you please advise why the below query is not showing any data?

`secrpt-active-users($select321$)`

Thanks
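If the macro itself is the suspect, one way to debug (a sketch; the macro name is taken from the post above) is to pull its definition over REST and then run the expanded SPL by hand; expanding the macro in the search bar with Ctrl+Shift+E (Cmd+Shift+E on Mac) does the same thing interactively:

| rest /servicesNS/-/-/configs/conf-macros
| search title="secrpt-active-users*"
| table title args definition

Also check that the $select321$ dashboard token is actually set when the panel runs; an unset token can keep the search from running at all.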
Hi! The log in question reads as: HTTP/1.1" 200 365 3. In our Splunk we don't have an "HTTP status" field to pivot off of. The HTTP response always shows as it does above, so I'd need a regex that gives me something like:

| rex field=HTTP response "   HTTP/1.1" ***
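A minimal sketch, assuming the snippet above sits in _raw and the status code is the first three-digit number after the HTTP version (the field names status and bytes, and the index, are my choices):

index=your_web_index
| rex field=_raw "HTTP/1\.\d\"\s+(?<status>\d{3})\s+(?<bytes>\d+)"
| stats count by status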
Hi, I was wondering if anyone knew how I can find the custom source types created by Data Manager Input? I had configured a custom source type for cloudwatch logs but can't seem to find it under the source types UI. Is this abstracted away somehow? How can I take a look at how this was configured under the hood? Thanks
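If you just want to see the stanza, a sketch that queries the merged props configuration over REST (the "*cloudwatch*" filter is a guess at the source type's name):

| rest /servicesNS/-/-/configs/conf-props
| search title="*cloudwatch*"
| table title eai:acl.app eai:acl.sharing

On the command line, $SPLUNK_HOME/bin/splunk btool props list --debug shows the same stanzas along with the file each setting comes from.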
Hi, I was wondering if someone could give me a straightforward breakdown of how I can link dropdown inputs with different panels using tokens. Regards,
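A minimal Simple XML sketch, assuming a classic dashboard; the token name host_tok and both searches are illustrative only. The dropdown writes the selection into $host_tok$, and every panel search that references $host_tok$ reruns when the selection changes:

<form>
  <label>Token demo</label>
  <fieldset>
    <input type="dropdown" token="host_tok" searchWhenChanged="true">
      <label>Host</label>
      <choice value="*">All hosts</choice>
      <default>*</default>
      <search>
        <query>index=_internal | stats count by host</query>
      </search>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal host=$host_tok$ | stats count by sourcetype</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>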
So I've run into a weird issue where almost all my apps show up as a web, and you can see where calls from one app are made to another app. All except one. In this one, connections to other apps just show up under the "Remote Services" page with the FQDN listed, so the dashboard view doesn't correctly link them. Is there a way I can say that a specific remote service is actually connected to another app? All the documentation I've found tells you how to rename it to a tier within the same app.
Hello, we are trying to get the OS version (e.g. RHEL 6, Ubuntu 6.x) from the Splunk Add-on for Unix and Linux. We have enabled the version.sh script and are trying to see how to get this info; currently I am only getting os_name as "linux". Is it possible to get additional info like RHEL or Ubuntu? Please help me out. Thanks
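A minimal sketch, assuming the version.sh events land in a sourcetype named version (check the add-on's inputs.conf for the real name) and that the script output contains the distribution string; the index name and the regex alternatives are assumptions to adapt to your data:

index=os sourcetype=version
| rex "(?<os_version>(Red Hat Enterprise Linux|Ubuntu|CentOS|SUSE)[^\r\n\"]*)"
| stats latest(os_version) AS os_version BY host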
Hello, can someone help me with a Splunk search to see whether IPv6 is enabled on target machines? Thanks
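There is no single built-in field for this, but if you collect interface data (for example via the Unix/Linux add-on's interfaces.sh script), a rough sketch is to look for IPv6 addresses in the raw events. The sourcetype name is an assumption, and the regex is loose (it will also match MAC addresses), so treat it as a starting point:

index=os sourcetype=interfaces
| rex max_match=0 "(?<ipv6_addr>(?:[0-9A-Fa-f]{1,4}:){2,7}[0-9A-Fa-f]{1,4})"
| where isnotnull(ipv6_addr)
| stats values(ipv6_addr) AS ipv6_addresses BY host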
I have around 10 alerts set up in Slack, and I'm trying to find the total number of times each alert triggered in the previous month. I'm using the following:

index="_internal" sourcetype="scheduler" thread_id="AlertNotifier*" NOT (alert_actions="summary_index" OR alert_actions="")
| search savedsearch_name IN.....
| stats count by savedsearch_name
| sort -count

This works and brings up figures for all 10 alerts; however, for some reason it doesn't seem to be accurate. For example, I know we receive multiple alerts a day from one particular search (which is set to fire every 15 minutes), so a count of 23 for the previous month just isn't correct. What am I doing wrong? PS: I'm a complete newbie here. Thanks in advance!
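Two things worth ruling out before blaming the search: the _internal index keeps roughly 30 days of data by default, so "previous month" can reach past retention, and the time range should be pinned (earliest=-1mon@mon latest=@mon) rather than left relative. A sketch of a cross-check that counts scheduled runs returning results, which lines up with triggering only for "number of results > 0" alerts (keep your own IN list in place of the ellipsis):

index=_internal sourcetype=scheduler status=success result_count>0 earliest=-1mon@mon latest=@mon
| search savedsearch_name IN (...)
| stats count AS triggered_runs BY savedsearch_name
| sort - triggered_runs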
Hi, now and again we get an extremely high system load average on the search head. I can't figure out why it is happening, and I have to do a kill -9 -1 and restart to fix it. While it happens we can't log into the Splunk GUI. When I kill Splunk I see a lot of processes, and after it is dead I can still see a splunkd process on the box and the load average is still high. Regards, Robert
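Once the box is responsive again, one place to look for the culprit (a sketch, assuming default introspection logging was running during the incident) is per-process resource usage in _introspection, which names the search and the user behind each splunkd search process; restrict it to the search head's host and the time window of the spike:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| stats max(data.pct_cpu) AS peak_cpu BY data.search_props.sid, data.search_props.user
| sort - peak_cpu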
Hi All, we have JSON logs where a few events are not parsing properly. When I check the internal logs, they show that the event length exceeded the default TRUNCATE value of 10000 bytes, so I tried increasing TRUNCATE to 40000, but the events are still not parsing correctly. The event length is around 26000. Props used:

[app:json:logs]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
CHARSET=UTF-8
TIMEPREFIX=\{\"timestamp"\:\"
KV_MODE=json
TRUNCATE=40000
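Two things stand out, offered as a sketch rather than a verified fix: TIMEPREFIX is not a valid props.conf key (the attribute is TIME_PREFIX), and SHOULD_LINEMERGE=true together with an explicit LINE_BREAKER can mangle single-line JSON events. Also note that TRUNCATE only takes effect on the tier that parses the data (heavy forwarder or indexer), not on a search head. A possible revision:

[app:json:logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
CHARSET = UTF-8
TIME_PREFIX = \{"timestamp":"
KV_MODE = json
TRUNCATE = 40000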
Hello, I have this:

results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.JSONResultsReader(oneshotsearch_results)
dict = json.loads(oneshotsearch_results)  # to get a dict to send data outside Splunk selectively

Error:

TypeError: the JSON object must be str, bytes or bytearray, not ResponseReader

How do I fix this? Thanks
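The TypeError comes from handing json.loads() the ResponseReader object itself instead of its contents (as an aside, naming a variable dict shadows the Python builtin, and assigning the oneshot response to results shadows the splunklib results module). A minimal sketch of one way around it, assuming kwargs_oneshot includes output_mode="json" (without that the response body is XML and json.loads fails differently):

import json

# assumes `service` is an authenticated splunklib.client.Service
# and `searchquery_oneshot` is your SPL string
kwargs_oneshot = {"output_mode": "json", "count": 0}
oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)

# ResponseReader supports read(); json.loads accepts the resulting bytes
payload = json.loads(oneshotsearch_results.read())
for result in payload.get("results", []):
    print(result)  # each result is a plain dict

Alternatively, iterate results.JSONResultsReader(oneshotsearch_results) directly; it yields dicts (plus diagnostic Message objects) without any manual json.loads.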
We have configured a health rule in AppDynamics to monitor storage usage across all servers (Hardware Resources|Volumes|/|Used (%)). The rule is set to trigger a Slack notification when the root storage exceeds the 80% warning and 90% critical thresholds. While the rule violation is correctly detected for all nodes, two of the VMs are above 90%, yet alerts are sent for only one of them. We need assistance in ensuring that alerts are triggered and sent for all affected nodes. Please also see the attached screenshots.
Here is an old post from 2019 that was unanswered: https://community.splunk.com/t5/Deployment-Architecture/Remove-missing-duplicate-forwarders-from-forwarder-managment/m-p/492211. I am running into the same issue on Splunk Enterprise 9.2.2. Basically we had maybe 400+ machines with version 9.0.10. After upgrading to the newer splunkforwarder 9.2.2, there are duplicate instances of the computers under Forwarder Management, pushing our client count to above 800. How can you remove the duplicates without going through each duplicate and clicking Delete Record? Thanks
Hi All, what are the licenses and subscriptions required for Lambda monitoring in AppDynamics? Our requirement is to monitor microservices in Lambda; the technology used is Node.js. As per the community answer below, this doesn't require an APM license and only requires AppDynamics Serverless APM for AWS Lambda: https://community.appdynamics.com/t5/Licensing-including-Trial/How-does-licensing-work-when-instrumenting-AppD-and-lambda/m-p/38605#M545. But I also found the following in the documentation (https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/serverless-apm-for-aws-lambda/subscribe-to-serverless-apm-for-aws-lambda): "An AppDynamics Premium or Enterprise license, using either the Agent-based Licensing model or the Infrastructure-based Licensing model." Please provide clarity on whether an APM license is required or not. Thanks, Fadil
Process transaction locally [idempotencyId=27cb55d0-3844-4e8f-8c4b-867ed64610a220240821034250387S39258201QE, deliveringApplication=MTNA0002, orderId=8e1d1fc0-5fe2-4643-bc1f-12debe6a7a06]

I would like to extract the order ID from the above sample data, which is 8e1d1fc0-5fe2-4643-bc1f-12debe6a7a06. Please suggest.
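A minimal sketch, assuming the line above is in _raw and the value is always a UUID (the field name order_id is my choice):

| rex field=_raw "orderId=(?<order_id>[0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12})"
| table order_id

If the value is not guaranteed to be a UUID, a looser pattern such as orderId=(?<order_id>[^\],\s]+) also works.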
Hello everyone, I want to filter data for the specific keyword "Snapshot created successfully" from a log file, but I am getting other events as well along with the searched keyword. My entries in props.conf and transforms.conf are as below:

props.conf
[sourcetype]
TRANSFORMS-filter = stanza

transforms.conf
[stanza]
REGEX = "Snapshot created successfully"
DEST_KEY = queue
FORMAT = indexqueue

Is there any issue here?
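A couple of likely issues, offered as a sketch of the documented keep-only-matching pattern: the REGEX value should not be wrapped in quotes (the quotes become part of the pattern), and a transform that sends matches to indexQueue does nothing on its own, because everything already goes to the index queue. You first need a catch-all transform that routes events to nullQueue, then a second one that rescues the matches; the order in TRANSFORMS matters:

props.conf
[sourcetype]
TRANSFORMS-filter = setnull, setparsing

transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = Snapshot created successfully
DEST_KEY = queue
FORMAT = indexQueue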
Hello, how do I "left join" by appending a CSV to an index on multiple fields? I was able to solve the problem, but:

1) Is it possible to solve this without string manipulation and mvexpand (see the code below)? mvexpand caused slowness.
2) Can stats values() be made NOT to remove duplicates? In this case stats values(*) as * by ip merged the fields "risk" and "score" and removed the duplicates; my workaround is to combine the strings to retain the duplicates.
3) a) Why does stats values() ignore empty strings?
   b) Why does concatenating null onto a non-null string produce null? I have to use fillnull in order to retain the data.

Please review the sample data, the expected output, and the code. Thank you for your help!

host.csv:
ip_address  host
10.1.1.1    host1
10.1.1.2    host2
10.1.1.3    host3
10.1.1.4    host4
10.1.1.5    host5
10.1.1.6    host6
10.1.1.7    host7

index=risk:
ip        risk   score  contact
10.1.1.1  riskA  6
10.1.1.1  riskB  7
10.1.1.1                person1
10.1.1.1  riskC  6
10.1.1.2                person2
10.1.1.3  riskA  6      person3
10.1.1.3  riskE  7      person3
10.1.1.4  riskF  8      person4
10.1.1.8  riskA  6      person8
10.1.1.9  riskB  7      person9

Expected "left join" output (every CSV row kept, plus all matching index rows):
ip        host   risk   score  contact
10.1.1.1  host1  riskA  6
10.1.1.1  host1  riskB  7
10.1.1.1  host1                person1
10.1.1.1  host1  riskC  6
10.1.1.2  host2                person2
10.1.1.3  host3  riskA  6      person3
10.1.1.3  host3  riskE  7      person3
10.1.1.4  host4  riskF  8      person4
10.1.1.5  host5
10.1.1.6  host6
10.1.1.7  host7

My current (slow) workaround:

| makeresults format=csv data="ip_address, host
10.1.1.1, host1
10.1.1.2, host2
10.1.1.3, host3
10.1.1.4, host4
10.1.1.5, host5
10.1.1.6, host6
10.1.1.7, host7"
| eval source="csv"
| rename ip_address as ip
| append [makeresults format=csv data="ip, risk, score, contact
10.1.1.1, riskA, 6, ,
10.1.1.1, riskB, 7 ,
10.1.1.1, ,, person1,
10.1.1.1, riskC, 6,,
10.1.1.2, ,, person2,
10.1.1.3, riskA, 6, person3,
10.1.1.3, riskE, 7, person3,
10.1.1.4, riskF, 8, person4,
10.1.1.8, riskA, 6, person8,
10.1.1.9, riskB, 7, person9"
    | fillnull score value=0
    | fillnull risk, score, contact value="N/A"
    | eval source="index"]
| eval strmerged = risk + "," + score + "," + contact
| stats values(*) as * by ip
| mvexpand strmerged
| eval temp = split(strmerged,",")
| eval risk = mvindex(temp, 0)
| eval score = mvindex(temp, 1)
| eval contact = mvindex(temp, 2)
| search (source="csv" AND source="index") OR (source="csv")
| table ip, host, risk, score, contact
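As an aside, one way to get the same left join without mvexpand or the string merging (a sketch, assuming host.csv is available as a lookup file): join type=left max=0 keeps every matching event instead of only the first, though join still runs its subsearch under the usual result limits:

| inputlookup host.csv
| rename ip_address AS ip
| join type=left max=0 ip
    [ search index=risk
      | table ip, risk, score, contact ]
| table ip, host, risk, score, contact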
Hello guys, I wonder if there's any query that can list the mapping between the existing data models and indexes. I would like to use this info to set index constraints for the data models to speed up searching. Thanks & Regards, Iris
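A sketch of one common approach: pull each data model's JSON definition over REST and extract the constraint searches, which is where indexes (or the tags and eventtypes that resolve to them) are referenced; the spath path assumes the standard data model JSON layout:

| rest /servicesNS/-/-/datamodel/model
| spath input=description path=objects{}.constraints{}.search output=constraint
| table title constraint

Constraints that use tags or eventtypes still have to be resolved to indexes by hand, for example by inspecting the corresponding eventtype definitions.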
So my manager needs to verify who was on call on certain days in order to pay them appropriately. Generally I would think there was some basic way to do this with Splunk On-Call. However, it appears that there is no way to do this (to my knowledge). Our company pays approximately 60K USD for this service, and I have to come here to ask a question and get support, because when I attempt to log a ticket the form cannot populate the instance section, preventing me from submitting it (separate issue; likely a dark pattern to avoid dealing with customer concerns as much as possible). Things I've tried:

- Viewing the schedule: nope, it only shows the current week.
- Getting a report. Surely this will work? Turns out no, it's just a summary of hours; lovely, no dates attached.
- I know! Importing the .ics file into my calendar, that has to work... Yet again nothing, zero, donuts: no historical data.

How on earth can I get a simple historical report saying who was actually on call for my schedule on what dates?