All Topics

Hello Splunk community. In a nutshell, my problem is that I have set up Splunk and a forwarder on a server and added input and output rules respectively; however, I am receiving no data from the forwarder on my Splunk dashboard. I am very new to the infosec world and am following a tutorial on bluecapesecurity.com for setting up a medium home lab. I have Windows Server 2019 and an Enterprise client installed. I would love any input on possible solutions; I am sure it is going to be something simple or a single setting I missed. The input.conf file is:

# All Windows Event logs
[monitor://C:\Windows\System32\Winevt\Logs\*.evtx]
disabled = false
index=winevtx

The input.conf file is saved in: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local. I have set up inbound and outbound firewall rules letting anything from the Splunk program through, as well as opened port 9997.
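A hedged sketch of what the forwarder-side configuration usually looks like for this kind of setup. Note that Splunk reads inputs.conf and outputs.conf (plural file names); the indexer hostname below is a placeholder:

```
# inputs.conf (in ...\etc\apps\<app>\local) -- monitor rendered Windows event logs
[monitor://C:\Windows\System32\Winevt\Logs\*.evtx]
disabled = false
index = winevtx

# outputs.conf -- tells the forwarder where to send data (placeholder host)
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = splunk-server.example.local:9997
```

The index named in inputs.conf (winevtx here) must also exist on the receiving Splunk Enterprise instance, and the indexer must be listening on port 9997 (Settings &gt; Forwarding and receiving &gt; Configure receiving).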
Hello, my Splunk query returns the marks of students in the below format.

User    Subject    Grade
John    Physics    D
        Science    A
        Math       B
        Social     C
        History    D
Mark    Physics    A
        Social     B
        History    C
Sam     Math       C
        Social     D
        History    A

How can I filter the query to show only marks for Physics and Social? Somewhat like the below:

User    Subject    Grade
John    Physics    D
        Social     C
Mark    Physics    A
        Social     B
Sam     Social     D

Thank you!
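Assuming the underlying events carry User, Subject, and Grade as ordinary fields (the blank User cells above are just display formatting), a minimal sketch of the filter might look like:

```
<your base search>
| where Subject IN ("Physics", "Social")
| table User, Subject, Grade
```

An equivalent form is `| search Subject="Physics" OR Subject="Social"` before the table command.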
I have a Splunk table that has 3 rows and a count for each row. How do I make each value in the table go to a different URL? This is what I have, but every row I click goes to the same link; I want each row to go to a different link.

    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_5ds4f5"
    },
    "title": "Device Inventory",
    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": {
                "url": "https://device.com",
                "newTab": true
            }
        }
    ],
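One common approach in Dashboard Studio is to compute a per-row URL in the search and reference it with a row token; the field name `link` and the device values below are hypothetical:

```
| eval link=case(device="deviceA", "https://device-a.example.com",
                 device="deviceB", "https://device-b.example.com",
                 device="deviceC", "https://device-c.example.com")
```

The event handler then reads the clicked row's value of that field:

```
"eventHandlers": [
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "$row.link.value$",
            "newTab": true
        }
    }
]
```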
What is it and how does it work? I've got it installed but there is no documentation that I can find... 
Hi Community, we have the "Splunk Add-on for Microsoft Office 365" installed. We've created inputs for "Audit.AzureActiveDirectory", "Audit.Exchange", and "Audit.SharePoint". As a result, we are getting all the Azure, Exchange, and SharePoint audit log events loaded into Splunk. Perfect! Now we want to add the Teams audit log events as well, but we don't see an "Audit.Teams" entry in the "Content Type" picklist on the "Add Management Activity" screen; we only see the entries listed above. The only option we see relative to Teams is on the "Create New Input" list, and that only loads aggregate Usage Report data on calling. Unfortunately, that is useless for us. Has anyone figured out how to load/ingest all the Teams-related Azure audit log events the way the AzureAD, Exchange, and SharePoint events are loaded? Thanks in advance for any advice!
So I have two fields that I want to subtract. One is SequenceNumber_Comment (e.g. 211) and the other is SequenceNumber_Withdrawal (e.g. 210). I want to subtract the values and put the result in the field match. Below is the SPL I have, but I get an empty value.

| eval match = tonumber(SequenceNumber_Comment) - tonumber(SequenceNumber_Withdrawal)

What do I do? Thank you!
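A likely cause is that the two fields never occur in the same event: eval returns null when either operand is missing. A hedged sketch that first brings both values onto one row before subtracting:

```
<your base search>
| stats latest(SequenceNumber_Comment) as SequenceNumber_Comment
        latest(SequenceNumber_Withdrawal) as SequenceNumber_Withdrawal
| eval match = tonumber(SequenceNumber_Comment) - tonumber(SequenceNumber_Withdrawal)
```

If the events should instead be paired by some shared key (a transaction ID, for example), the stats would need a `by <key>` clause.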
I want to display total transactions without the where condition in the result, alongside other fields which have a specific where condition. For example:

| eval totalResponseTime=round(requestTimeinSec*1000)
| convert num("requestTimeinSec")
| rangemap field="totalResponseTime" "totalResponseTime"=0-3000
| rename range as RangetotalResponseTime
| eval totalResponseTimeabv3sec=round(requestTimeinSec*1000)
| rangemap field="totalResponseTimeabv3sec" "totalResponseTimeabv3sec"=3001-60000
| rename range as RangetotalResponseTimeabv3sec
| eval Product=case((like(proxyUri,"URI1") AND like(methodName,"POST")) OR (like(proxyUri,"URI2") AND like(methodName,"GET")) OR (like(proxyUri,"URI3") AND like(methodName,"GET")),"ABC")
| bin span=5m _time
| stats count(totalResponseTime) as TotalTrans count(eval(RangetotalResponseTime="totalResponseTime")) as TS<3S count(eval(RangetotalResponseTimeabv3sec="totalResponseTimeabv3sec")) as TS>3SS by Product URI methodName _time
| eval TS<XS=case(Product="ABC",'TS<3S')
| eval TS>3S = 'TotalTrans'-'TS<XS'
| eval SLI=case(Product="ABC",round('TS<3S'/TotalTrans*100,4))
| rename methodName AS Method
| where (Product="ABC") and (SLI<99)
| stats sum(TS>3S) As AvgImpact count(URI) as DataOutage by Product URI Method
| fields Product URI Method TotalTrans SLI AvgImpact DataOutage
| sort Product URI Method
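If the goal is to keep the unfiltered totals visible after the where clause, one common pattern is eventstats, which writes an aggregate onto every row before any filtering happens. A sketch, assuming the field names above:

```
...
| eventstats sum(TotalTrans) as AllTrans by URI Method
| where (Product="ABC") AND (SLI<99)
```

AllTrans then survives the where and can be listed in the final fields command alongside the filtered columns.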
Hi, complete Splunk beginner here, so sorry if this is a stupid question. I'm trying to chart some data that I'm pulling from an MQTT broker. The Splunk MQTT Modular Input app is doing its thing and data is arriving every 5 minutes. Using the most basic query ( source="mqtt://MeteoMQTT" ) gives these results:

Fri Jul 26 15:24:46 BST 2024 name=mqtt_msg_received event_id= topic=meteobridge msg={"meteoTemp":17.9,"meteoHumidity":64,"meteoRainlasthour":0,"meteoWindSpeed":6.04,"meteoWindDirection":"SW","meteolunarPercent":67.3}

What I really want to do, though, is to break out the values from the most recent data poll into separate "elements" that can then be added to a dashboard. I tried using the spath command:

source="mqtt://MeteoMQTT" | spath output=meteoTemp path=meteoTemp

But that just returned the whole object again. So, how can I parse out the different values (meteoTemp, meteoHumidity, meteoRainlasthour, etc.) so that I can add their most recent values as individual dashboard elements, please? TIA.
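Since the JSON here lives inside the msg field rather than being the whole raw event, spath needs to be pointed at that field with input=. A sketch for pulling out the latest values:

```
source="mqtt://MeteoMQTT"
| spath input=msg
| stats latest(meteoTemp) as meteoTemp
        latest(meteoHumidity) as meteoHumidity
        latest(meteoRainlasthour) as meteoRainlasthour
```

Each latest(...) result can then drive its own single-value panel on a dashboard.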
Hi All, I have a lookup table with the following fields: FQDN, Hostname, and IP. I need to check which of the assets in the lookup table (about 700) have been logging in the last 7 days and which haven't. I used the following basic SPL to get a list of hosts which are logging:

| tstats earliest(_time) latest(_time) count where index=* earliest=-7d by host

The issue I'm having is that the host output in the above SPL comes through in different formats: it may be an FQDN, a Hostname, or an IP address. How do I use my lookup table to check whether the assets in the lookup table are logging without having to do 3 joins on FQDN, Hostname, and IP? Here is a query that somewhat worked, but it is too inefficient:

| inputlookup lookup.csv
| eval FQDN=lower(FQDN)
| eval Hostname=lower(Hostname)
| join type=left FQDN [
    | tstats latest(_time) as lastTime where index=* earliest=-7d by host
    | rename host as FQDN
    | eval FQDN=lower(FQDN)
    | eval Days_Since_Last_Log = round((now() - lastTime) / 86400)
    | convert ctime(lastTime) ]
| join type=left Hostname [
    | tstats latest(_time) as lastTime where index=* earliest=-7d by host
    | rename host as Hostname
    | eval Hostname=lower(Hostname)
    | eval Days_Since_Last_Log = round((now() - lastTime) / 86400)
    | convert ctime(lastTime) ]
| join type=left IP [
    | tstats latest(_time) as lastTime where index=* earliest=-7d by host
    | rename host as IP
    | eval IP=lower(IP)
    | eval Days_Since_Last_Log = round((now() - lastTime) / 86400)
    | convert ctime(lastTime) ]
| rename lastTime as LastTime
| fillnull value="NULL"
| table FQDN, Hostname, IP, Serial, LastTime, Days_Since_Last_Log

I'm somewhat new to Splunk, so thank you for the help!
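One join-free sketch: run tstats once, normalize the host, try the lookup against each of the three key fields, and coalesce the matches. Field names follow the lookup described above; the lookup would need lowercase (or case-insensitive) keys for the lower() normalization to match:

```
| tstats latest(_time) as lastTime where index=* earliest=-7d by host
| eval host=lower(host)
| lookup lookup.csv FQDN as host OUTPUT FQDN as m_fqdn
| lookup lookup.csv Hostname as host OUTPUT Hostname as m_host
| lookup lookup.csv IP as host OUTPUT IP as m_ip
| eval matched=coalesce(m_fqdn, m_host, m_ip)
| where isnotnull(matched)
| eval Days_Since_Last_Log = round((now() - lastTime) / 86400)
```

Assets from the lookup that never appear in this result are the ones not logging; appending the full inputlookup and grouping by asset is one way to surface those.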
So the premise is that I constructed two dashboards: dashboard A as an overview and dashboard B as details. On dashboard A, I configured one of the displays with an on-click trigger that connects to dashboard B. However, the global time condition on dashboard A is not carried over to dashboard B. Is it possible to make the time range on dashboard B dynamic?
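In Simple XML, the usual workaround is to pass dashboard A's time tokens along in the drilldown URL so that dashboard B's time picker adopts them. A sketch, assuming both dashboards name their time-picker token `time` and the app/dashboard IDs are placeholders:

```
<drilldown>
  <link target="_blank">/app/my_app/dashboard_b?form.time.earliest=$time.earliest$&amp;form.time.latest=$time.latest$</link>
</drilldown>
```

The `form.<token>` URL parameters pre-populate the corresponding inputs on the target dashboard.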
Hello, am I eligible for another 60-day free trial of Splunk Enterprise with the same Splunk email account? I tried to install Splunk Enterprise on another PC (first with the same .msi executable used the first time, then with a newly downloaded one), but the installation failed (error: the Splunk Enterprise setup wizard ended prematurely). My free trial runs until 11 August 2024, and I didn't uninstall Splunk Enterprise on the first PC. I'm confused; can anyone help me?
My org has millions of events coming in through firewalls, and a search over a 24-hour timeframe took 12.5 hours to run. I am considering breaking it up into four 6-hour timeframes (changing the earliest/latest statements accordingly) and having them all outputlookup to the same lookup file. I would then inputlookup the file and enrich accordingly. However, I want to reset after each day, i.e. I do not want the file to keep growing. Would I set append=false on query1, and append=true for query2, query3, and query4?
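That pattern should behave as described: append=false recreates the file from scratch, and append=true adds rows to it. A sketch of the four scheduled searches (lookup name is a placeholder), assuming query1 always runs first each day:

```
<search 1> earliest=-24h latest=-18h ... | outputlookup append=false firewall_enrich.csv
<search 2> earliest=-18h latest=-12h ... | outputlookup append=true  firewall_enrich.csv
<search 3> earliest=-12h latest=-6h  ... | outputlookup append=true  firewall_enrich.csv
<search 4> earliest=-6h  latest=now  ... | outputlookup append=true  firewall_enrich.csv
```

The daily reset comes from the first search truncating the file; scheduling order therefore matters.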
We want to monitor and poll data from REST APIs and index the responses in Splunk. We know that this could be achieved with the Splunkbase app REST API Modular Input, but since it is a developer-supported paid application, we wanted to know whether there is any alternative way to do the same in Splunk Enterprise. A quick response is highly appreciated.
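One free alternative is a scripted input: a small script in an app's bin directory that calls the API and prints the JSON response to stdout, scheduled via inputs.conf. A sketch with placeholder names (script, sourcetype, and index are all assumptions):

```
# inputs.conf -- run the polling script every 5 minutes
[script://./bin/poll_api.py]
interval = 300
sourcetype = rest:json
index = api_data
disabled = false
```

The script itself can be as simple as an HTTP GET plus print; for anything needing checkpointing or credential storage, writing a modular input is the more robust route.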
I am working on the below query, in which I want to calculate lead_time in HH:MM. The query gives me results in statistics mode but does not give any results as a line chart. Please help me fix it. (Screenshots: results appear in statistics mode; no results are shown when using "line chart".) Below is the complete query:

index=abc
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats count by completion_time FULFILLMENT_START_TIMESTAMP _time
| eval lead_time = (completion_time - FULFILLMENT_START_TIMESTAMP)
| eval hours=floor(lead_time / 3600)
| eval minutes=floor((lead_time % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval HH_MM = hours . ":" . formatted_minutes
| timechart max(HH_MM) as "Maximum" avg(HH_MM) as "Average" min(HH_MM) as "Minimum"
| eval hours=floor(Maximum / 3600)
| eval minutes=floor((Maximum % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval max_HH_MM = hours . ":" . formatted_minutes
| eval hours=floor(Average / 3600)
| eval minutes=floor((Average % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval avg_HH_MM = hours . ":" . formatted_minutes
| eval hours=floor(Minimum / 3600)
| eval minutes=floor((Minimum % 3600) / 60)
| eval formatted_minutes=if(minutes < 10, "0" . minutes, minutes)
| eval min_HH_MM = hours . ":" . formatted_minutes
| table _time max_HH_MM avg_HH_MM min_HH_MM
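A likely root cause: timechart is asked to aggregate HH_MM, which is a string ("2:05"), and max/avg/min of strings cannot be charted. A sketch that charts the numeric seconds instead, leaving any HH:MM formatting to a table view or the axis labels:

```
index=abc
| eval completion_time=strptime(COMPLETED_TIMESTAMP, "%Y-%m-%dT%H:%M:%S.%3QZ")
| eval lead_time = completion_time - FULFILLMENT_START_TIMESTAMP
| timechart max(lead_time) as Maximum avg(lead_time) as Average min(lead_time) as Minimum
```

This assumes FULFILLMENT_START_TIMESTAMP is already epoch seconds, as the original subtraction implies; if it is a string timestamp, it needs its own strptime() first.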
Hi, has anyone used the "ServiceNow Security Operations Event Ingestion Addon for Splunk ES" or the "ServiceNow Security Operations Addon" app to configure OAuth2? If yes, how do you set the user in the "created by" field in ServiceNow? It seems to be automatically set to the user who configured the OAuth2 connection. With basic auth it is simple because you decide which user connects to ServiceNow, but with OAuth2 there is just a clientID and secret with no user field, and yet it seems a user is being sent alongside the event by Splunk.
I have JSON data, but all of it arrives in a single event; individual events are not being parsed properly. The event data is below. Please help: what should I do to get this into a standard format in Splunk? This is on Splunk Cloud.

{"date_extract_linux":"2024-07-26 08:44:23.398743330","database":
{"script_version":"1.0","global_parameters": {"check_name":"General_parameters","check_status":"OK","check_error":"","script_version":"1.0","host_name":"flosclnrhv03.pharma.aventis.com","database_name":"C2N48617","instance_name":"C2N48617","database_version":"19.0.0.0.0","database_major_version":"19","database_minor_version":"0"},
"queue_mem_check": {"check_name":"queue_mem_check","check_status":"OK","check_error":"","queue_owner":"LIVE2459_VAL","queue_name":"AQ$_Q_TASKREPORTWORKTASK_TAB_E","queue_sharable_mem":"4072"},
"queue_mem_check": {"check_name":"queue_mem_check","check_status":"OK","check_error":"","queue_owner":"SYS","queue_name":"AQ$_ALERT_QT_E","queue_sharable_mem":"0"},
"fra_check": {"check_name":"fra_check","check_status":"OK","check_error":"","flash_in_gb":"40","flash_used_in_gb":".62","flash_reclaimable_gb":"0","percent_of_space_used":"1.56"},
"processes": {"check_name":"processes","check_status":"OK","check_error":"","process_percent":"27.3","process_current_value":"273","process_limit":"1000"},
"sessions": {"check_name":"sessions","check_status":"OK","check_error":"","sessions_percent":"16.41","sessions_current_value":"252","sessions_limit":"1536"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"1355","percent_used":"2"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"23606","percent_used":"36"},
"cdb_tbs_check": {"check_name":"cdb_tbs_check","check_status":"OK","check_error":"","tablespace_name":"UNDOTBS1","total_physical_all_mb":"65536","current_use_mb":"26","percent_used":"0"},
"cdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN2467","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"627","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"784","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"SYSAUX","total_physical_all_mb":"65536","current_use_mb":"1546","percent_used":"2"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"USERS","total_physical_all_mb":"65536","current_use_mb":"1149","percent_used":"2"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"SYSTEM","total_physical_all_mb":"65536","current_use_mb":"705","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1NN8944","tablespace_name":"INDX","total_physical_all_mb":"32767","current_use_mb":"378","percent_used":"1"},
"pdb_tbs_check": {"check_name":"pdb_tbs_check","check_status":"OK","check_error":"","pdb_name":"O1S48633","tablespace_name":"USRINDEX","total_physical_all_mb":"65536","current_use_mb":"128","percent_used":"0"},
} }
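Two things are worth noting here. First, the payload is not valid JSON (duplicate keys such as repeated "cdb_tbs_check", plus trailing commas), so Splunk's automatic JSON extraction cannot parse it whole; ideally the producing script would emit one JSON object per check. If the feed cannot change, a rough props.conf sketch for splitting on the per-check objects — sourcetype name is a placeholder and the regex would need tuning against the real stream:

```
# props.conf (deployed via an app on Splunk Cloud) -- hypothetical sourcetype
[oracle:healthcheck:json]
SHOULD_LINEMERGE = false
# break the stream so each {"check_name": ...} object starts a new event;
# the capture group marks the separator text that is discarded
LINE_BREAKER = (,\s*\"[a-z_]+\":\s*)\{\"check_name\"
TRUNCATE = 0
KV_MODE = json
```

Treat this strictly as a starting point: with malformed JSON, search-time extraction of the resulting fragments may still need spath or rex cleanup.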
I need help with assigning permissions in Splunk.
1. There is a user who needs to edit their dashboards and alerts in Splunk. This user needs access to the dashboards and alerts of two applications. I want to ensure that the user has the minimum permissions necessary to edit only those two applications' dashboards and alerts.
2. A user in our system has created an alert and wants to integrate it with ServiceNow. However, when attempting to select an account name in the integration settings, the user is unable to do so. What minimum permissions does this user require?
Hello Splunkers. I have a dropdown that calculates week_start values for the last whole year, and it has to pick "Last week" as the default. I noticed that the dropdown, instead of remembering the label, adds the value to <default></default>. I've tried calculating last_week as a token and adding it to <default></default>, which it picks up correctly, but it shows the epoch time in the dropdown instead of selecting the corresponding label "Last week". Code for defining the dropdown search and initialising the token $last_week$:

<fieldset submitButton="false">
  <input type="dropdown" token="week">
    <label>week</label>
    <fieldForLabel>time</fieldForLabel>
    <fieldForValue>start_time</fieldForValue>
    <search>
      <query>| makeresults count=52 | fields - _time | streamstats count | eval count=count-1 | eval start_time = relative_time(now(),"-".count."w@w+1d") | eval time = case(count==1, "Last week", count==0, "Current week", 1==1, strftime(start_time,"%a %d-%b-%Y")) | table time, start_time | eval start_time=round(start_time,0)</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <default>$last_week$</default>
    <initialValue>$last_week$</initialValue>
  </input>
</fieldset>

The token initialisation that calculates last week relative to now():

<init>
  <eval token="last_week">relative_time(now(),"-1w@w+1d")</eval>
</init>
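One thing worth checking: a dropdown pre-selects a label only when the default value exactly matches one of the option values. The search rounds start_time to an integer, while relative_time() in the <eval> returns a fractional epoch, so the token value never matches an option and is displayed raw. A sketch of the aligned initialisation, rounding the same way the search does:

```
<init>
  <eval token="last_week">round(relative_time(now(),"-1w@w+1d"),0)</eval>
</init>
```

If a mismatch persists, comparing the literal token value against the dropdown's start_time column usually reveals the remaining formatting difference.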
Thanks for reading, and have a good day.

1. We made a lookup file from a CSV in our test environment, then copied and pasted it onto the production server, but SPL does not recognize the lookup file, even though I double-checked that it is in the correct location in the app ("/opt/splunk/etc/apps/(my app)/lookups"). Is there something more I need to check, or do I need to make a new CSV? Also, the file's owner is different in the new environment (it was admin before); does this have any relation to file permissions (chmod) in Linux?

2. I can't fully understand how a lookup's scope works. If we create a lookup file in app "A", can app "B" also use that lookup file? If I put a lookup file in the admin app and set all its permissions to "all", will a search run from another app correctly reference that lookup table? I don't understand the relationship between grouping files per app and lookup files.
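Two hedged pointers. On the ownership question: the CSV must be readable by the OS user that runs splunkd (often `splunk`), so a chown/chmod after copying is a reasonable first check. On scope: a lookup defined in app A is visible to other apps only when its sharing is set to global; in configuration terms that corresponds to an export stanza in the app's metadata (file and app names below are placeholders):

```
# $SPLUNK_HOME/etc/apps/my_app/metadata/local.meta
[lookups/mylookup.csv]
export = system
access = read : [ * ], write : [ admin ]
```

The same change can be made in the UI via Settings &gt; Lookups &gt; Lookup table files &gt; Permissions &gt; "All apps".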
Hello, we receive data via _TCP_ROUTING from forwarders belonging to another team that uses another Splunk cluster. We don't use the same indexes. Instead of routing data based on source or host as we receive it on our indexers, is it possible to route data from one index (the one specified in their inputs.conf) to our own index? In particular, what would the props.conf stanza be? Thanks.
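Index-time routing on the receiving indexers can overwrite the destination index via a transform keyed on the incoming sourcetype; this works for data from universal forwarders because parsing happens on your indexers. A sketch with placeholder sourcetype and index names:

```
# props.conf on the receiving indexers
[their:sourcetype]
TRANSFORMS-route_to_our_index = overwrite_index

# transforms.conf
[overwrite_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = our_index
```

Matching on the index the sender assigned (rather than sourcetype) is also possible by keying the REGEX on the _MetaData:Index source key, but keep in mind that data arriving already cooked from heavy forwarders will bypass these index-time transforms.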