All Topics


Hi, the following picture shows the chart on the left-hand side. I want a drilldown based on time when I click on the graph. For example, when I click on the spike value 49, I should get all the values for the time when that spike happened. TIA.
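One common approach (a sketch in Simple XML; the index, sourcetype, and span are placeholders) is to use the `$earliest$` and `$latest$` tokens that a chart drilldown exposes for the clicked column, and pass them to a second search:

```
<chart>
  <search>
    <query>index=main sourcetype=my_data | timechart span=1h count</query>
  </search>
  <drilldown>
    <!-- $earliest$/$latest$ hold the time range of the clicked column -->
    <link target="_blank">search?q=search%20index%3Dmain%20sourcetype%3Dmy_data&amp;earliest=$earliest$&amp;latest=$latest$</link>
  </drilldown>
</chart>
```

Clicking the spike then opens a search scoped to exactly that time bucket, showing all events behind the value.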
Hello, can anyone help? Looking at the splunkd logs I see:

11-02-2020 16:13:51.870 +1100 WARN  CMMasterProxy - Master is down! Make sure pass4SymmKey is matching if master is running.
11-02-2020 16:13:51.870 +1100 ERROR CMSlave - Waiting for master to come up... (retrying every second)

We have two indexers, one search head, and one master (deployment server). Indexer 1 will not restart its splunkd service. Please treat this as a priority: how can I bring the splunkd service back up on indexer 1?
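For what it's worth, a frequent cause of "Master is down!" when the master is actually running is a pass4SymmKey mismatch between the peer and the master, exactly as the WARN message suggests. A sketch of the relevant stanza on the peer (the URI and key are placeholders):

```
# server.conf on the indexer (cluster peer) -- sketch, values are placeholders
[clustering]
mode = slave
master_uri = https://<master-host>:8089
pass4SymmKey = <must match the [clustering] pass4SymmKey on the master>
```

The key is stored encrypted after restart, so it has to be re-entered in plain text when changed, then splunkd restarted.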
Good morning. After finally getting the Windows Infrastructure app to produce information, I now have a new issue: it is ingesting far too much data, so much that it is going over my license. Over the weekend it ingested quite a bit, and I need to get it trimmed down quickly. Previously we only monitored logons, logoffs, and some Group Policy changes. Does anyone have a bare-bones inputs.conf for the app? I went through it and disabled what I don't want, but I'm not sure it's working. I also read somewhere that this app doesn't need to be on all the domain controllers; is that right?
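For reference, Windows event log inputs support whitelisting by event ID, which is one way to cut volume back to just logon/logoff and policy-change events. A sketch (the event IDs below are examples, not a recommendation):

```
# inputs.conf sketch -- index only the event IDs you care about
[WinEventLog://Security]
disabled = 0
# 4624/4634 = logon/logoff, 4719 = audit policy change (examples only)
whitelist = 4624,4634,4719
```

Anything not matching the whitelist is dropped at the forwarder, before it counts against the license.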
Hi team, we have deployed Splunk Cloud in our environment. I am trying to extract fields from a WinEventLog source whose events run to more than 45 lines, but when I use the GUI field extractor it does not show the complete event. Instead I see only 12 to 13 lines of the original event; the remaining lines are missing in the Field Extractions tab. How can I see the whole event during field extraction on a Splunk Cloud search head? Is there another way to extract these fields? The requirement is to extract all the fields in these particular logs. Kindly help with this request.
I am relatively new to Splunk as it is really used. My previous usage was all ad hoc, when it was made available to me for log analysis after an "event"; that usage was mostly the equivalent of egrep plus regular expressions, and Splunk and I got the job done. I am finally in a place where all the features of Splunk are being (or are intended to be) used. I am being asked whether Splunk can function as a web application security testing tool, à la Burp Suite, ZAP, Nikto, or the like. My take is no: Splunk can perform after-the-fact analysis that can potentially reveal and alert on web application security issues, but it is not a substitute for dedicated tools of the sort previously mentioned. Have I got this right? Thanks!
Does AppDynamics support the Informix database for JDBC or Database Monitoring?
I want to create a Splunk webhook that sends alerts to Teams, and I don't want to receive emails for that search. The search I am currently using is: index="audit_log" sourcetype="aws:cloudwatchlogs" source="*" ssh NOT ((undefined)) curl. That search returns results containing https://outlook, and I am struggling to exclude them. I don't want to receive emails for the webhook that posts to an Outlook webhook using curl. If anyone knows what to search, that would really help a lot.
I would like to use https://pypi.org/project/keyring/ in my custom app. I've been able to do it on Linux but not on Windows. I need pip to install it. I tried calling Python from the Splunk-provided interpreter, but it won't work, so I ended up installing the latest Python on my server to pull the package. Even after installing keyring that way, it can't be found. I've tried using sys.path.append, but it has not helped. I've also tried placing the downloaded package (from C:\Users\{username}\AppData\Local\Programs\Python\Python39\Lib\site-packages) in my bin folder. When I run splunk cmd python myscript.py, I get:

Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\customapp1\bin\myscript.py", line 10, in <module>
    import keyring
  File "C:\Program Files\Splunk\etc\apps\customapp1\bin\keyring\__init__.py", line 1, in <module>
    from .core import (
  File "C:\Program Files\Splunk\etc\apps\customapp1\bin\keyring\core.py", line 10, in <module>
    from . import backend
  File "C:\Program Files\Splunk\etc\apps\customapp1\bin\keyring\backend.py", line 42
    class KeyringBackend(metaclass=KeyringBackendMeta):
SyntaxError: invalid syntax

I'll also note that splunk cmd python myscript.py only works if I specify the script's full path (this is no trouble on my Linux box); otherwise I get: python: can't open file 'myscript.py': [Errno 2] No such file or directory. I have a Windows environment variable named "SPLUNK_HOME" set to "C:\Program Files\Splunk".
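That SyntaxError on `class KeyringBackend(metaclass=...)` is what Python 2 prints when it parses Python 3-only syntax, so it looks like the script is being run by a Python 2 interpreter rather than a Python 3 one. Separately, a common pattern for shipping a pure-Python dependency inside an app is to vendor it into a `lib` folder and prepend that folder to `sys.path` before importing. A sketch (paths are illustrative):

```python
# myscript.py (sketch): make a vendored package importable from the app.
# Paths are illustrative; adjust to your app's layout.
import os
import sys

try:
    APP_BIN = os.path.dirname(os.path.abspath(__file__))
except NameError:
    APP_BIN = os.getcwd()  # fallback when run interactively

LIB_DIR = os.path.join(APP_BIN, "lib")  # e.g. ...\customapp1\bin\lib

# Prepend so the vendored copy takes precedence over anything else.
if LIB_DIR not in sys.path:
    sys.path.insert(0, LIB_DIR)

# import keyring  # would now resolve from bin\lib if vendored there
print(sys.version_info.major)  # confirms which interpreter actually ran
```

Printing the interpreter version is a quick way to verify whether `splunk cmd python` is handing you Python 2 or 3 on that box.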
I have a possibly unique requirement. I'm trying to split up some log data, but one field contains a string with numerous pieces of information, both numeric and character based. The numeric fields are fixed length, which I can split without issue, BUT the string also contains a character value (a person's name). The name part of the string is not fixed length (it depends on the name), so splitting by character position will not produce the right results. What I wanted to ask is: is it possible to split a string between the end of a character value and a numeric one?

** Update ** I think it is best to post a string closer to what I want to do. The current field data looks like this:

1234567801D52411021103100001860000CF19John Doe01D5232102110265000159000001D5231202110265000103400601D53324021103100000000005

It is a single string, but it holds multiple pieces of information. I want to extract the name (John Doe) and use the remaining values as they are, since they are fixed length. My problem is removing the name, because its variable length makes the string hard to split. Thanks in advance.
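A sketch of the idea in Python, assuming the name is the only run of alphabetic words separated by spaces (the short letter codes like D5 and CF never contain a space, so requiring at least one space isolates the name):

```python
import re

record = ("1234567801D52411021103100001860000CF19John Doe"
          "01D5232102110265000159000001D5231202110265000103400601D53324021103100000000005")

# Assumption: only the name contains space-separated alphabetic words;
# requiring at least one space skips standalone codes like "CF".
m = re.search(r"[A-Za-z]+(?: [A-Za-z]+)+", record)
name = m.group(0)
fixed_part = record[:m.start()] + record[m.end():]  # fixed-width remainder
```

In SPL the same pattern could drive a rex extraction, e.g. `| rex field=data "(?<name>[A-Za-z]+(?: [A-Za-z]+)+)"` (the field name `data` is an assumption), with the remainder then split by fixed offsets.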
I'm using the Splunk plugin for Jenkins and the Splunk App for Jenkins. I can see logs in the jenkins, jenkins_console, and jenkins_statistics indexes, but I'm unable to get any logs into the jenkins_artifact index. How do I get logs into jenkins_artifact? What type of logs are written to this index? Is there documentation I can read about it?
Hi all, I need some advice or help. I have two indexes I'd like to join, but it is not working as I expected.

index a:
  name     info
  person1  aa-bb-cc
  person2  bb-cc-dd
  person3  cc-dd-ee
  thing1   dd-ee-ff

index b:
  identifier  note
  aabbcc      this is good
  bbccdd      this is bad
  ccddee      this is good

I'd like to produce the result below:

  name     info      note
  person1  aa-bb-cc  this is good
  person2  bb-cc-dd  this is bad
  person3  cc-dd-ee  this is good

What I currently have is:

index=a | search name=person* | eval identifier=replace(info, "-","") | join type=outer identifier [search index=b] | table name info note

But the resulting "note" field is still empty/null. Did I miss something in this search?
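If the join itself turns out to be the culprit, a join-free sketch of the same idea (index and field names copied from above; untested against your data) merges the two result sets with append and stats:

```
index=a name=person*
| eval identifier=replace(info, "-", "")
| append [ search index=b ]
| stats values(name) as name, values(info) as info, values(note) as note by identifier
| where isnotnull(name)
```

Because stats groups both sides by the shared identifier, this also sidesteps join's subsearch result limits.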
I want to get ticket counts for the aging of backlog tickets on a weekly basis. A ticket's age depends on the date it was opened. The count should be calculated every Sunday, and the tickets should be divided into the ranges mentioned, i.e. 0-5 days, 6-15 days, and so on.
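A sketch of the bucketing in SPL (the index name, field name, and date format are placeholders for whatever your ticket data uses):

```
index=tickets status=open
| eval age_days = floor((now() - strptime(opened_date, "%Y-%m-%d")) / 86400)
| eval age_range = case(age_days <= 5,  "0-5 days",
                        age_days <= 15, "6-15 days",
                        age_days <= 30, "16-30 days",
                        true(),         "30+ days")
| stats count by age_range
```

Saved as a scheduled report with a cron schedule of `0 6 * * 0`, this would run every Sunday at 06:00.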
We found that Splunk custom alerts stopped working after upgrading to version 8.0.1. We are also receiving "searches delayed" errors on the cluster. Can anyone help with this?
Is it possible to dynamically set a token name to show/hide panels based on the value of an input? I have a dashboard with panels that are specific to certain applications but irrelevant to others, and I want to hide the irrelevant panels when they don't apply. For example:

<input type="text" searchWhenChanged="true">
  <default>$service_name$</default>
  <change>
    <set token="view_$value$"></set>
  </change>
</input>

This results in: "Invalid token name: "viewxml_$value$"". I'm trying NOT to hard-code service names, because there are many, using a <condition> if possible.
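Token names themselves cannot contain `$value$`, so one workaround sketch is a `<change>` block with `<condition>` elements that set a fixed visibility token per case (the service names here are placeholders, and this does require listing the values once):

```
<input type="dropdown" token="service_name" searchWhenChanged="true">
  <change>
    <condition value="service_a">
      <set token="show_service_a">true</set>
      <unset token="show_service_b"></unset>
    </condition>
    <condition>
      <set token="show_service_b">true</set>
      <unset token="show_service_a"></unset>
    </condition>
  </change>
</input>
<!-- panels then declare visibility: <panel depends="$show_service_a$"> -->
```

The trailing bare `<condition>` acts as the catch-all for every other value.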
Hello folks, I have data in JSON format (data.json) and want to visualize it by creating a dashboard in Splunk Enterprise. Due to my company structure, I can only use the HTTP Event Collector (HEC) to send data to Splunk Enterprise. Can anyone help me with a Python-based script, or a template where I just have to enter the token and URL? Please help, as I need this quickly and it is very important for my project. Thank you.
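A minimal stdlib-only sketch (the URL, token, index, and sourcetype below are placeholders you would replace; the `/services/collector/event` endpoint and the `Authorization: Splunk <token>` header are standard HEC conventions):

```python
import json
import urllib.request

HEC_URL = "https://your-splunk-host:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def build_hec_request(events, url=HEC_URL, token=HEC_TOKEN,
                      index="main", sourcetype="_json"):
    """Wrap each event in the HEC envelope and build the POST request."""
    body = "".join(
        json.dumps({"event": e, "index": index, "sourcetype": sourcetype})
        for e in events
    )
    return urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        headers={"Authorization": "Splunk " + token,
                 "Content-Type": "application/json"},
        method="POST",
    )

def send(events):
    # Note: a self-signed HEC cert would need an ssl context; omitted here.
    with urllib.request.urlopen(build_hec_request(events)) as resp:
        return resp.status

# Usage sketch: load data.json (assumed to be a JSON array of events).
# with open("data.json") as f:
#     send(json.load(f))
req = build_hec_request([{"message": "hello"}])
```

HEC accepts multiple event envelopes concatenated in one POST body, which is why the events are joined without separators.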
Hello, I want to log my Windows NPS (Network Policy Server, i.e. RADIUS) to Splunk. I found this thread: https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-parse-Radius-log-files-into-splunk-What-the-configuration/td-p/351275. But my data is not being forwarded to my Splunk server. I installed the universal forwarder on the RADIUS server and deployed the default Windows app. My Windows event logs are forwarded from the RADIUS server to the Splunk server, but not the RADIUS log file. I created an app with inputs.conf and props.conf as in the thread above. The inputs.conf file:

[monitor://C:\Windows\System32\LogFiles]
sourcetype = ias
index = radius
disabled = 0
whitelist=IN.*\.log
alwaysOpenFile=1

How can I find out why the data is not forwarded?
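One way to see why a monitor input is not picking files up is to search the forwarder's own splunkd logs, which are forwarded to _internal by default (the host name is a placeholder):

```
index=_internal host=<radius_server> source=*splunkd.log*
    (component=TailingProcessor OR component=TailReader OR log_level=WARN OR log_level=ERROR)
| sort - _time
```

It is also worth verifying that the radius index actually exists on the indexer: events sent to a nonexistent index are dropped, with corresponding errors in splunkd.log.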
Hello Splunkers, this might just be a sanity check, but I'll ask anyway. Yesterday I deployed Stream to around ~360 hosts (all Windows; I need to run setpermissions on the *nix hosts before they'll come up). Everything is set to estimate except DNS, which is rolled out to all hosts. I can see a nice, constant flow of data into the stream index, ES is triggering notables, and everything seems to be working nicely. However, when I check the Stream forwarder status, I'm bouncing between 100 and 300 hosts with an error status over the last hour (this has been constant since I deployed), with a fairly constant 50-80 active and a couple in warning. In the internal logs I see:

Unable to ping server (8f938d78-0c1b-43a6-b32c-e6e094e7bc2b): /en-us/custom/splunk_app_stream/ping/ status=502

I checked that we can ping that host. These same hosts have data; in fact, ALL hosts have data. On the Stream search head I see these corresponding errors:

10-31-2020 16:21:51.375 +0000 ERROR HttpClientRequest - HTTP client error=Read Timeout while accessing server=http://127.0.0.1:8065 for request=http://127.0.0.1:8065/en-us/custom/splunk_app_stream/ping/.

I'm thinking this is a resource issue on the Stream search head, but the stats in the Monitoring Console look fine (30% CPU). The Stream SH is also the monitoring console and is not a big box: 8 cores/16 GB RAM, RHEL 8. It does nothing other than Stream and MC, and no one is using it. Are there any limits that might cause this? The 502 seems to indicate the MC/Stream server is the cause, but before I throw more cores at it, it would be nice to confirm. Any help appreciated! Cheers!
Hello Splunkers, we are receiving Config notifications, CloudTrail, and other data from AWS through Kinesis. The general pattern is:

Config -> Event Rules -> Event Hub -> Kinesis -> HEC (indexer cluster)

This works, seemingly flawlessly. We separate the data into indexes based on the account number due to security policies, so I've set up props/transforms to do this. It seems to work most of the time: 77% of the traffic ends up in the correct index, but 33% (on average) ends up in the HEC's default index (aws_config). Is it possible that the transforms aren't triggering all the time? The events are identical in format; sourcetype and source are identical. Here are the transforms:

[aws-account1]
REGEX = 010016492034
DEST_KEY = _MetaData:Index
FORMAT = aws-account1

Props:

[aws:config:notification]
TRANSFORMS-aws_config_notification=aws-account1

Am I missing something here? Is there anything I should look for in _internal? I remember an example from training years ago where props hierarchy would mess with data when there were multiple props/transforms, and the intermittent nature *might* fit that, but I have no idea where to start troubleshooting. The HEC is on a cluster of indexers, so the config is all pushed via the CM, thus no differences. Any suggestions would be greatly appreciated! Cheers!
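One quick check (a sketch; the account string is copied from the transform above) is whether the events landing in the default index actually contain the string the REGEX needs, which separates "transform not firing" from "regex not matching":

```
index=aws_config sourcetype=aws:config:notification
| eval has_account=if(like(_raw, "%010016492034%"), "yes", "no")
| stats count by has_account
```

If a large share of the stray events come back with has_account=no, the problem is the event content (the account number is not in _raw for those events), not the transform pipeline.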
Hi, I'm new to Splunk and just getting used to it. I'm trying to search Windows event logs relative to the "TargetUserName" field. I want to run a search that shows user accounts that have had two different event codes associated with them in a 7-day period. Specifically: a user has event code 4724 generated, and then event code 4740 occurs within 7 days after 4724 was seen. I was thinking I'd have to define the user name as a variable that brings back results when the event code conditions match (as described above). Or is there a better way of going about this? Any help is appreciated. Thanks.
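One way to sketch this in SPL is with the transaction command, which groups the two events per user inside the window (the index name is an assumption; 4724 is a password reset attempt and 4740 is an account lockout):

```
index=wineventlog (EventCode=4724 OR EventCode=4740)
| transaction TargetUserName startswith="EventCode=4724" endswith="EventCode=4740" maxspan=7d
| where eventcount >= 2
| table _time, TargetUserName, duration
```

On large data sets, a streamstats-based approach that tracks the time of the last 4724 per TargetUserName would be a cheaper alternative to transaction.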