All Topics


Hello, I did the Splunk ES installation following all the steps noted here: https://docs.splunk.com/Documentation/ES/7.3.2/Install/InstallEnterpriseSecurity. Now, when trying to find those indexes, there is no indexes.conf in /opt/splunk/etc/apps/SplunkEnterpriseSecuritySuite/local or default. I am trying to find index=notable, notable_summary, and risk so I can see notable events from correlation searches. How am I supposed to get these indexes in the apps inside ES, as shown here: https://docs.splunk.com/Documentation/ES/7.3.2/Install/Indexes? Any help would be appreciated.
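For reference, a minimal sketch of what a hand-written indexes.conf entry for these indexes could look like, assuming they have to be created manually on the indexers (in a standard ES deployment the definitions normally ship inside the supporting SA-/DA- add-ons rather than SplunkEnterpriseSecuritySuite itself, so check those apps first; the paths below are defaults and only illustrative):

# indexes.conf (illustrative sketch)
[notable]
homePath   = $SPLUNK_DB/notable/db
coldPath   = $SPLUNK_DB/notable/colddb
thawedPath = $SPLUNK_DB/notable/thaweddb

[risk]
homePath   = $SPLUNK_DB/risk/db
coldPath   = $SPLUNK_DB/risk/colddb
thawedPath = $SPLUNK_DB/risk/thaweddb
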
I've got two servers providing me temperature data. Host A has Sensor1 and Sensor2; Host B has Sensor1 and Sensor2. My goal is a line graph of all four sensors, named after their actual room names. As long as I use host=HostA in the base search, my timechart works great with a 20-minute average:

index=tempmon sourcetype=tempdata host=HostA
| timechart span=20min eval(round(avg(Sensor1),2)) as "Room12", eval(round(avg(Sensor2),2)) as "Room13"

I'm struggling to understand whether a subsearch or a 'where' statement would help me do something like this:

index=tempmon sourcetype=tempdata
Where host=HostA | eval Room12=Sensor1 | eval Room13=Sensor2
Where host=HostB | eval Room14=Sensor1 | eval Room15=Sensor2
| timechart span=20min avg(Room12), avg(Room13), avg(Room14), avg(Room15)

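A minimal sketch of one way to do this without a subsearch, assuming both hosts are included in the base search and a conditional eval maps each host's sensors onto its room fields (the room-to-sensor mapping is taken from the question):

index=tempmon sourcetype=tempdata (host=HostA OR host=HostB)
| eval Room12=if(host="HostA", Sensor1, null())
| eval Room13=if(host="HostA", Sensor2, null())
| eval Room14=if(host="HostB", Sensor1, null())
| eval Room15=if(host="HostB", Sensor2, null())
| timechart span=20min eval(round(avg(Room12),2)) as "Room12", eval(round(avg(Room13),2)) as "Room13", eval(round(avg(Room14),2)) as "Room14", eval(round(avg(Room15),2)) as "Room15"

Since avg() ignores null values, each series only reflects the host it belongs to.
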
I am trying to ingest Linux logs into Splunk.

1. I have deployed the unix_TA through the deployment server to the heavy forwarder and to the universal forwarder, with the inputs.conf defined in the local directory. The indexes are defined in the inputs.conf as well.
2. On the universal forwarder I have confirmed that the TA is present in the /opt/splunkuniversalforwarder/apps directory with the inputs.conf as deployed.
3. Permissions have been granted to the splunkfwd user on the universal forwarder on the Linux server to read /var/log.
4. The TA is also installed on the search head.

I am able to see the metrics logs in the _internal index; however, I do not see the event logs. I have run a tcpdump on the heavy forwarder's CLI and have confirmed that logs are coming in. Any ideas on what I am missing?

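In case it helps, a minimal sketch of the kind of monitor stanza the add-on needs enabled in its local/inputs.conf (the index name linux_os and the whitelist are assumptions; the add-on's inputs ship disabled by default, so the disabled = 0 line is the part most often missed):

# inputs.conf (sketch; index name and whitelist are assumptions)
[monitor:///var/log]
whitelist = (messages|secure|auth\.log|syslog)
index = linux_os
disabled = 0
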
Hi, I have an XML response in Splunk whenever I query an index. I keep getting this error message in it:

</soap:Envelope>", RESPONSE="<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header/>
  <soapenv:Body>
    <soapenv:Fault xmlns:trefault="http://tresoap.intecbilling.com/fault/2.0">
      <faultcode>trefault:ApplicationException</faultcode>
      <faultstring><CM-41398> ERROR: Value &quot;Apple Watch 4G 5GB&quot; supplied for Fact &quot;OrderedComp.RatePlan_R&quot; is not allowed by the fact&apos;s filter search or expression</faultstring>
      <detail>
        <trefault:Detail>
          <trefault:Message><CM-41398> ERROR: Value &quot;Apple Watch 4G 5GB&quot; supplied for Fact &quot;OrderedComp.RatePlan_R&quot; is not allowed by the fact&apos;s filter search or expression</trefault:Message>
          <trefault:ErrorId>41398</trefault:ErrorId>
        </trefault:Detail>
      </detail>
    </soapenv:Fault>
  </soapenv:Body>

Can someone tell me how to extract this error message from the XML and display it as a table in a separate dashboard panel?

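A minimal sketch of one way to pull the message out with rex, assuming the fault text always sits inside the <faultstring> element of the raw event (the index name is a placeholder):

index=your_index "faultstring"
| rex field=_raw "<faultstring>(?<error_msg>.*?)</faultstring>"
| table _time error_msg

The resulting search can then be used as the search behind a table panel in the dashboard.
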
Hi all, I'm trying to make an in-dashboard menu (not the app top menu). We have a few main dashboards, each linking to more specific dashboards. I'd like a menu at the top, something like:

Overview | today | yesterday | this week | last week

Each item has an interaction set to jump to another dashboard. I've created a single value element, set some text and added an interaction. That all works fine, but the "inspect / fullscreen / refresh" menu keeps popping up in the way of mouse clicks; see the screenshot.

a) Is there a way to hide this menu for specific elements, or in general?
b) Any other suggestions on how I might make an in-page menu?

I'm using Dashboard Studio in grid layout... hopefully the answer isn't "use Classic".

I have a PowerShell script running Get-BrokerSession which exports the results to a txt file. The file is then forwarded via the universal forwarder. I'm trying to create a search that keys the output data on the session key. The Citrix add-on app is not allowed at our location.

We would like to create a dashboard with a table showing the top 10 MQ queues based on their current queue length. This is based on the MQ extension, which delivers the custom metrics as expected.

1. With Dashboards & Reports, there is no table widget available.
2. With an Analytics dashboard, it seems that accessing (custom) metrics with ADQL is not possible.

Any solution to this?

Hello all, greetings. I am looking for a clear explanation of the memk() function used with the convert command: how it works, and where the m, g, k come in (the letter k indicates kilobytes, m indicates megabytes, and g indicates gigabytes). When I try this function to convert kb to KB, I am not seeing any change in values. Please help.

index=_internal source="*metric*"
| convert memk(kb) as KB
| table kb, KB

Thanks,
Manish Kumar

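For what it's worth, my understanding (hedged, not authoritative) is that memk() reads the field value as a size with an optional k, m or g suffix and normalizes it to kilobytes; a bare number with no suffix is already treated as kilobytes, which would explain why memk(kb) on the metrics kb field returns the same value. A quick illustrative test with a made-up field:

| makeresults
| eval mem="2g" ```hypothetical value with a unit suffix```
| convert memk(mem) as mem_kb
| table mem, mem_kb

If that understanding is right, mem_kb should come back as 2097152 (2 GB expressed in kilobytes).
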
Hi, we are currently experiencing reliability issues with the Microsoft Teams Add-on for Splunk (https://splunkbase.splunk.com/app/4994). The renewal of the Azure subscription, which should take place every 24h, sometimes does not happen, and collection will not start again unless we create new inputs (subscription, webhook, call records). I did not find an error message about this in the logs, so we built an alert for the problem. We use the TA from a HF in the DMZ, so it is possible that we missed a FW rule for one of Microsoft's Graph IPs. The problem does not appear at regular intervals. Occasionally the webhook will crash, requiring a restart of the Splunk process. Has anyone experienced a similar issue and found a solution?

I have tried to solve this problem with all the combinations I can think of, but I am missing something key. I have various logs coming in with a source pattern of /var/log/containers/*. I would like to drop the DEBUG logs, and hence have the following in props.conf:

[source::/var/log/containers/*]
TRANSFORMS-null = debug_to_null

and in transforms.conf:

[debug_to_null]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

After making the above change, as expected, the logs with the DEBUG keyword are getting dropped.

Now I would also like to drop logs with another pattern for a particular source under /var/log/containers, so I've updated my props.conf like this:

[source::/var/log/containers/*_integration-business*.log]
TRANSFORMS-null = setnull

[source::/var/log/containers/*]
TRANSFORMS-null = debug_to_null

and updated transforms.conf like this:

[debug_to_null]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

[setnull]
REGEX = NormalizedApiId failed to resolve
DEST_KEY = queue
FORMAT = nullQueue

After making this change, only logs with the DEBUG keyword are getting dropped; the logs with "NormalizedApiId failed to resolve" are still being ingested. I was hoping that logs with the DEBUG keyword from all source paths matching /var/log/containers/* would be dropped, and that logs with the "NormalizedApiId failed to resolve" keyword from the particular source path matching /var/log/containers/*_integration-business*.log would also be dropped, but it doesn't seem to work that way. Please guide me on this.

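A guess at the cause, hedged: when two props.conf stanzas match the same source and both set an attribute with the same name (TRANSFORMS-null here), Splunk's stanza-precedence rules mean only one of them wins for that file, so the more specific files never get both transforms applied. A minimal sketch of one way around it is to give the two transform classes distinct names so both can apply:

# props.conf (sketch)
[source::/var/log/containers/*]
TRANSFORMS-null = debug_to_null

[source::/var/log/containers/*_integration-business*.log]
# different class name so it does not collide with TRANSFORMS-null above
TRANSFORMS-business_null = setnull

Another option along the same lines is to list both transforms on the more specific stanza, e.g. TRANSFORMS-null = debug_to_null, setnull.
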
I log from Lambda using Python's print statement and save the output in a CloudWatch log group. The log group is collected into Splunk with the Splunk Add-on for AWS. However, some logs are collected and some are not.

Collected:
1. INIT_START Runtime Version: ~
2. START RequestId: ~
3. END RequestId: ~
4. REPORT RequestId: ~

Not collected:
1. The logs I wrote with the print statement

Has anyone been through the same situation, or does anyone have a solution for a similar situation?

I have the following csv file:

id,name,age,male
1,lily,10,girl
2,bob,12,boy
3,lucy,12,girl
4,duby,10,boy
5,bob,11,boy
6,bob,10,boy
7,lucy,11,girl

Now, I want to use Splunk to count the number of times each name is repeated, and the result after counting should be as follows:

id,name,age,male,result
1,lily,10,girl,1
2,bob,12,boy,3
3,lucy,12,girl,2
4,duby,10,boy,1
5,bob,11,boy,3
6,bob,10,boy,3
7,lucy,11,girl,2

How can I use SPL to accomplish this task?

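A minimal sketch using eventstats, assuming the file has been uploaded as a lookup named people.csv (the lookup name is an assumption; eventstats adds the per-name count to every row without collapsing them):

| inputlookup people.csv ```lookup name is an assumption```
| eventstats count as result by name
| table id, name, age, male, result
| sort 0 id
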
Hi Team, I have a dashboard with 7 panels. I need an alert that monitors the dashboard and alerts us if any one of the panels shows a percentage > 10. Is it possible to create such an alert and include the dashboard link in it?

index=db OR index=app
| eval join=if(index="db",processId,pid)
| stats sum(rows) sum(cputime) by join

The above is a simple example of how to join two indexes. But how do you join two indexes where the key is made up of two fields?

K.

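A minimal sketch of one approach: build a composite key by concatenating both fields with a separator. The second key fields (dbHost and appHost) are hypothetical names used only for illustration:

index=db OR index=app
| eval join=if(index="db", processId.":".dbHost, pid.":".appHost) ```dbHost/appHost are hypothetical second key fields```
| stats sum(rows) sum(cputime) by join
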
Hi All, this is the first time I have encountered this. I have an HF that I have admin access to from the server's backend. However, I can't seem to log in to its web portal using my LDAP credentials (authentication is via LDAP), and the former admins of this instance left without any documentation or handing over an account we can use. Do you know how I can work around this from the backend so that I can eventually log in to the web portal? I have viewed the passwd file, but it is hashed, so I'm not sure where to look or what to do with the limited access I have. I also tried creating an account using a command from the bin folder (splunk add user); however, it asks me to authenticate first before completing. Any help is deeply appreciated!

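A sketch of one recovery path that I believe works on current Splunk versions (assuming you have filesystem access and can restart splunkd; treat it as a suggestion, not an official procedure): move $SPLUNK_HOME/etc/passwd aside, create $SPLUNK_HOME/etc/system/local/user-seed.conf with the content below, and restart Splunk so it seeds a fresh local admin you can then use to review the LDAP configuration.

# user-seed.conf (new local admin; username and password are your choice)
[user_info]
USERNAME = admin
PASSWORD = <choose a strong temporary password>
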
I have a lookup table containing a list of regular expressions, and I am trying to see if there are matches against a field in one of my indexes. I can't figure out how to do it, as it is not a direct comparison of values. I'd appreciate any help on this.

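A rough sketch of one approach using map, assuming the lookup is regex_list.csv with a column named pattern and the field being tested is named message (all three names are assumptions; map runs one search per lookup row, so it only suits small lists, and patterns containing double quotes or $ may need extra escaping):

| inputlookup regex_list.csv ```lookup and column names are assumptions```
| map maxsearches=50 search="search index=your_index | regex message=\"$pattern$\" | eval matched_pattern=\"$pattern$\""
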
Hello all, I am using streamstats with time_window=60m to calculate the moving average over the past hour. However, when I set current=f I receive an error in the search log: "Error in 'streamstats' command: Cannot set current to false when using a time window." Is there a way to get around this? streamstats is exactly what I need to calculate the moving average, but I do not want to include the current event. If there is no way around it, is there another way to calculate the moving average without including the current event? Thanks

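A sketch of one workaround, assuming the numeric field is called value (a hypothetical name): leave current at its default of true, collect the windowed sum and count, and then subtract the current event's own contribution.

... | streamstats time_window=60m sum(value) as win_sum, count as win_count ```value is a hypothetical field```
| eval moving_avg=if(win_count > 1, (win_sum - value) / (win_count - 1), null())

This gives the average of the other events in the hour-long window, excluding the current one.
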
Hi All, I created a dropdown for index, but after adding the token to the panel query it does not work as expected when I select the ALL option from the dropdown; when I select DEV_INDEX or SIT_INDEX it works fine. How do I tweak the code so the panel query shows data from both indexes when ALL is selected from the dropdown?

<form version="1.1" theme="light">
  <label>Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="Index" searchWhenChanged="true">
      <label>Indexes</label>
      <choice value="dev_index, sit_index">All</choice>
      <choice value="dev_index">DEV_INDEX</choice>
      <choice value="sit_index">SIT_INDEX</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Total Count</title>
        <search>
          <query>index IN ("$index$") source=application.logs |stats count by codes</query>
          <earliest>timepicker.earliest</earliest>
          <latest>timepicker.latest</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

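A sketch of the changes that I believe would make the ALL option work (hedged, not tested here): the token is defined as "Index" with a capital I, so the query has to reference $Index$ rather than $index$; the All choice needs each value individually quoted so index IN (...) receives two values; and the time range needs $...$ around the timepicker tokens.

<choice value="&quot;dev_index&quot;,&quot;sit_index&quot;">All</choice>
<choice value="&quot;dev_index&quot;">DEV_INDEX</choice>
<choice value="&quot;sit_index&quot;">SIT_INDEX</choice>
...
<query>index IN ($Index$) source=application.logs | stats count by codes</query>
<earliest>$timepicker.earliest$</earliest>
<latest>$timepicker.latest$</latest>
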
Hello, can you help me create a dashboard that contains the following charts/data for bookmarked content?

1. A chart (tstats counts) of the content that was bookmarked over the past 7 days.
2. A chart with the names of the alerts/detections that were bookmarked over the past 30 days.

Also, in this situation, how do I find the exact field name in my Splunk environment: bookmarked or bookmark? I use both of them in my query but it still does not work, or should we use "active"? Please propose a query and help me find the exact field name so I can build the right query. Thank you.

The print server OS is Windows Server 2019. I would like to get the PrintService-Admin log into Splunk. I tried the following in the inputs.conf of the universal forwarder on the print server:

[WinEventLog://Microsoft-Windows-PrintService/Admin]
disabled = 0
index = winps

which is what is suggested in https://community.splunk.com/t5/Getting-Data-In/Microsoft-Windows-PrintService-Operational-Logs/m-p/77633, but I cannot find any events in the index. The log is enabled on the server, under Applications and Services Logs > Microsoft > Windows > PrintService.

I also tried to set up a data input from the web console to monitor the log files in the folder C:\Windows\System32\winevt\Logs with the regex Microsoft\-Windows\-PrintService.+\.evtx, so it picks up both Microsoft-Windows-PrintService%4Admin.evtx and Microsoft-Windows-PrintService%4Operational.evtx. But again, no events show up in the index. Hope somebody can help with this. Thanks.

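A couple of illustrative checks from the forwarder's CLI, assuming the default install path C:\Program Files\SplunkUniversalForwarder (the path is an assumption, and the index name winps is taken from the question; it is also worth confirming that the winps index actually exists on the indexers):

REM run from the universal forwarder's bin directory
splunk btool inputs list --debug | findstr /i PrintService
REM look for errors from the Windows event log input in splunkd.log
findstr /i "PrintService WinEventLog" ..\var\log\splunk\splunkd.log
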