All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I'm currently working on ingesting 8 CSV files from a path using inputs.conf on a UF, and the data is getting ingested. The issue is that these 8 CSV files are overwritten daily with new data by an automation script, so the data inside each CSV file changes every day.

I want to ingest the complete CSV data into Splunk daily, but what I can see is that only a small subset of the data is ingested, not the complete CSV file.

My inputs.conf is:

[monitor://C:\file.csv]
disabled = false
sourcetype = xyz
index = abcd
crcSalt = <DATETIME>

Can someone please help me confirm whether I'm using the correct input or not? The ultimate requirement is to ingest the complete data from the 8 CSV files into Splunk daily. Thank you.
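A minimal sketch of one common workaround, assuming the automation script can be pointed at a spool directory (the path, index, and sourcetype here are placeholders): a batch input reads each file once in full and then deletes it, so a fully rewritten file is always ingested completely:

# inputs.conf on the UF - batch consumes and removes each dropped file
[batch://C:\spool\*.csv]
move_policy = sinkhole
disabled = false
sourcetype = xyz
index = abcd

If the files must stay in place, note that monitor tracks a checksum of the file head plus a read pointer; when a file is overwritten with similar-looking content, Splunk may treat it as already seen and ingest only what looks new, which matches the partial-ingestion symptom described above. Having the script write each day's data to a new, date-stamped filename is the other common fix.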
Hello! I am trying to collect 3 additional Windows event logs and I have added them to inputs.conf, for example:

[WinEventLog://Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = true

Admin, Autopilot, and Operational were added the same way. I also added in props.conf:

[WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin]
rename = wineventlog

[WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Autopilot]
rename = wineventlog

[WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Operational]
rename = wineventlog

The data are coming in; however, none of the fields are parsed as interesting fields. Is there something I am missing? I looked through some of the other conf files, but I think I am in over my head trying to make a new section in props.conf. I thought the base [WinEventLog] stanza would take care of the basic extraction of interesting fields like EventID, so I am a bit lost.
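A hedged guess at the cause: with renderXml = true, events from these channels typically arrive with sourcetype XmlWinEventLog and source XmlWinEventLog:<channel>, so props stanzas keyed on [WinEventLog:<channel>] may never match, and the classic wineventlog extractions would not parse XML payloads anyway. A minimal sketch of the simplest fix, assuming the default text-format Windows extractions are what's wanted, is to switch the inputs back to classic rendering:

# inputs.conf - classic rendering so the stock WinEventLog
# extractions (EventCode, etc.) apply via the existing rename
[WinEventLog://Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = false

Alternatively, keep renderXml = true and rename the sourcetypes to XmlWinEventLog so the XML-format extractions apply; that route assumes the Splunk Add-on for Microsoft Windows is installed on the search head.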
How can I implement a post-process search using the Dashboard Studio framework? I can see that there is excellent documentation for doing this in Simple XML (Searches power dashboards and forms - Splunk Documentation), but I can't seem to find the equivalent information for the JSON source of Dashboard Studio. Note: I am not attempting to use a savedSearch.
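In Dashboard Studio the equivalent of a post-process search is a chain search: a ds.chain data source that extends a ds.search base. A minimal sketch of the dataSources section of the dashboard's JSON definition, where the IDs and both queries are placeholders:

"dataSources": {
    "base": {
        "type": "ds.search",
        "options": {
            "query": "index=_internal | fields sourcetype, component"
        }
    },
    "chained": {
        "type": "ds.chain",
        "options": {
            "extend": "base",
            "query": "| stats count by sourcetype"
        }
    }
}

A visualization then references "chained" as its primary data source, and the base search runs only once however many chains extend it.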
I'm not very good with SPL. I currently have Linux application logs that show the IP address, user name, and whether the user's login failed or succeeded. I'm interested in finding a successful login after one or more failed login attempts. I currently have the following search. The transaction command is necessary where it is; otherwise, all the events are split up into separate events of varying line counts.

index=honeypot sourcetype=honeypotLogs
| transaction sessionID
| search "SSH2_MSG_USERAUTH_FAILURE" OR "SSH2_MSG_USERAUTH_SUCCESS"

Below is an example event. For clarity, I replaced/omitted details from the logs below.

[02] Tue 27Aug24 15:20:57 - (143323) Connected to 1.2.3.4
...
...
[30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_FAILURE
...
[30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[02] Tue 27Aug24 15:20:57 - (143323) User "bob" logged in
[31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_SUCCESS: successful login

Any tips on getting my search to find events like this? Currently I only have field extractions for the IP (1.2.3.4), user (bob), and sessionID (143323). I can possibly create a field extraction for the SSH2 messages, but I don't know whether that will help or not. Thanks!
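A minimal sketch of one way to keep only sessions that contain a failure followed by a success, assuming the existing sessionID extraction and that each transaction's _raw contains both markers:

index=honeypot sourcetype=honeypotLogs
| transaction sessionID
| where like(_raw, "%SSH2_MSG_USERAUTH_FAILURE%") AND like(_raw, "%SSH2_MSG_USERAUTH_SUCCESS%")

Because transaction preserves event order inside each session, requiring both markers in the same session approximates "failure then success"; for a strict ordering check, extract the timestamps of the first failure and of the success as fields and compare them with eval.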
I am trying to exclude non-business hours and weekends in my mstats query.

Original query:

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1",avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1",sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1",avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name ="xxxxxx" earliest=-31d@d latest=@d-1m by entity.application.name
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2),Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2),Load_Count1=round(Load_Count1,0),XHR_Count1=round(XHR_Count1,0)
| table entity.application.name,Avg_Load_Response

I modified this query as below and am not getting any results:

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1",avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1",sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1",avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name ="xxxxx" earliest=-31d@d latest=@d-1m by entity.application.name
| eval hour = tonumber(strftime(_time,"%H"))
| eval dow = tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2),Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2),Load_Count1=round(Load_Count1,0),XHR_Count1=round(XHR_Count1,0)
| table entity.application.name,Avg_Load_Response1

Can anyone please help me achieve this? Thanks in advance.
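A hedged sketch of the usual fix: without a span, mstats returns one aggregated row per application covering the whole 31 days, so the derived hour/dow fields have nothing meaningful to filter on. Splitting by a time span first, filtering, then re-aggregating keeps the business-hours restriction (shown here for the load metrics only; the xhr metrics follow the same pattern):

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1" avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxx" earliest=-31d@d latest=@d-1m span=1h by entity.application.name
| eval hour=tonumber(strftime(_time,"%H")), dow=tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| stats sum(Load_Count1) as Load_Count1 avg(Avg_Load_Response1) as Avg_Load_Response1 by entity.application.name
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Load_Count1=round(Load_Count1,0)

One caveat: re-averaging hourly averages weights every hour equally rather than every action; if an action-weighted average matters, carry the hourly counts through and compute sum(avg*count)/sum(count) instead.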
Here is my current query. I either get the Totals label in the last column or not at all. I need it to show in the first column, at the beginning of the Totals row. Any help is greatly appreciated. Thanks.

index=etims_na sourcetype=etims_prod platformId=5 bank_fiid=CHUA
| eval response_time=round(if(strftime(_time,"%Z") == "EDT",((j_timestamp-entry_timestamp)-14400000000)/1000000,((j_timestamp-entry_timestamp)-14400000000)/1000000-3600),3)
| stats count AS Transactions count(eval(response_time <= 1)) AS "Good" count(eval(response_time <= 2)) AS "Fair" count(eval(response_time > 2)) AS "Unacceptable" avg(response_time) AS "Average" BY bank_fiid
| eval "%Good"=(Good/Transactions)*100
| eval "%Fair"=(Fair/Transactions)*100
| eval "%Unacceptable"=(Unacceptable/Transactions)*100
| addinfo
| eval "Report Date"=strftime(info_min_time, "%m/%Y")
| table bank_fiid, "Transactions", "Good", "%Good" "Fair", "%Fair", "Unacceptable", "%Unacceptable", "Average", "Report Date"
| rename bank_fiid as "Vision ID"
| addcoltotals label=Total
| append [|makeresults | eval "Vision ID"="Threshold" | eval "Good" = "response <= 1s" | eval "Fair" = "1s < response <= 3s" | eval "Unacceptable" = "3s < response"]
| fields - _time
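A minimal sketch of the usual fix: addcoltotals writes its label into the field named by labelfield, so pointing that at the first column (after the rename) puts "Total" at the start of the totals row:

| rename bank_fiid as "Vision ID"
| addcoltotals labelfield="Vision ID" label=Total

Without labelfield, the label lands in a new field that either trails at the end of the table or is dropped by a later field selection, which matches the "last column or not at all" symptom.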
Hi Splunkers,

I'm trying to compare the policy names from today with the policy names from the past 48 hours to see if there is any change. I tried using append as well as join to compare the results from the last 48 hours with today's timeframe, but I'm unable to get the expected output. For example, in the table below I'm trying to see whether any policy names changed over the last 48 hours: policy_3_sf was renamed to policy_3_sk, and similarly policy_4_sg and policy_5_gh were renamed to policy_4_sp and policy_5_gk respectively. These new names are what I would like my query to list, per the requirement.

Last_48_Hours_Policy_Names   Today_Policy_Names   New_Policy_Names
policy_1_xx                  policy_1_xx
policy_2_xs                  policy_2_xs
policy_3_sf                  policy_3_sk          policy_3_sk
policy_4_sg                  policy_4_sp          policy_4_sp
policy_5_gh                  policy_5_gk          policy_5_gk

Could you please let me know if my approach is correct or if something is missing in my queries? Thanks,
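A minimal sketch of one approach that avoids append/join entirely, assuming the events carry a policy_name field and that each policy has a stable key embedded in its name (here, hypothetically, everything before the last underscore): search the whole 48-hour window once, classify each event as "today" or "before", and compare the latest name per key:

index=your_policy_index earliest=-48h
| eval period=if(_time >= relative_time(now(), "@d"), "today", "before")
| eval policy_key=replace(policy_name, "_[^_]+$", "")
| stats latest(eval(if(period="before", policy_name, null()))) as Last_48_Hours_Policy_Names latest(eval(if(period="today", policy_name, null()))) as Today_Policy_Names by policy_key
| eval New_Policy_Names=if(Last_48_Hours_Policy_Names != Today_Policy_Names, Today_Policy_Names, null())

The index name, field name, and key-extraction regex are assumptions to adapt; the core idea is a single search with conditional stats instead of correlating two subsearches.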
Hi team, I wanted to create a summary index using the following query (daily request counts for the last month):

index=service_audit REQUEST
| bucket span=d _time
| eval time_diff=round(((stopDate - startDate)/3600000),0)
| stats count as Request_Count by _time

1. I followed all the steps mentioned on splunk.com.
2. I created a new summary index named service_audit_summary.
3. I used the collect command:

index=service_audit REQUEST
| bucket span=d _time
| eval time_diff=round(((stopDate - startDate)/3600000),0)
| stats count as Request_Count by _time
| collect index=service_audit_summary

but the summary index is not showing any events.

4. I even created a report and tried, but I'm facing the same problem.

Could anyone please suggest a fix? Thanks in advance.
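A couple of hedged checks. First, collect stamps each summary event with the row's _time, and here _time is bucketed by day going back a month, so searching the summary index over a short window (the default "Last 24 hours", for instance) will miss almost everything; try searching over all time first:

index=service_audit_summary earliest=0

Second, the destination index must exist on the indexers and be writable by the role running the search. If this runs as a scheduled report, enabling summary indexing in the report's settings (which adds the collect step for you) is the more conventional setup than an inline collect.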
I'm trying to update our Bitglass app on Splunk Cloud (from 1.0.14 to 1.0.25), but there doesn't seem to be a way to do this without uninstalling the app and reinstalling. If I select the app from Splunkbase, it only offers the option to download the app file. If I attempt to upload the file, the system informs me that I already have the app installed. There's no upgrade/update option in the Apps management that I can find. Does anyone have a suggestion for how to solve this?
Since switching to org.apache.httpcomponents.client5:httpclient5:5.3.1 (from org.apache.httpcomponents:httpclient:4.5.14), we have lost tracking of business transactions between tiers. Are there any known issues with the Java agents not supporting httpclient5, or is there an agent version where this is known to work? This is similar to https://community.appdynamics.com/t5/Idea-Exchange/App-Agent-supporting-Apache-httpclient-5/idi-p/43192 Thanks, Steve
I need to collect data from a folder on a Windows machine. The problem is that this folder is mounted as a disk, and another host sends data to it. The classic inputs.conf monitor stanza for a folder source does not work here. How can I fix this problem?
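A minimal sketch of the adjustment commonly suggested here (path, index, and sourcetype are placeholders). The universal forwarder usually runs as Local System, which cannot see per-user mapped drive letters, so monitoring the underlying UNC path tends to work better than the mounted letter:

[monitor://\\fileserver\share\logs]
disabled = false
index = abcd
sourcetype = xyz

If a drive letter or mount point must be used, the forwarder service generally needs to run as a domain account that both has the mapping at service level and can read the share.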
I have been looking into Smart Agents to simplify agent management, and our applications run on Microsoft Windows. Looking at the documentation, a Smart Agent installable is not available for Windows? I am surprised, given its widespread installed base. Please correct me if I am wrong; I would like to know more about this. Update: Please ignore the above; there is an installation package available for Windows here: Smart Agent (appdynamics.com). Thanks.
I have a search which yields a time and correlated serial number for event A. I want to use this time and serial number to search for event B; event B must meet criteria X.

index="june_analytics_logs_prod" message="* new_state: Diagnostic, old_state: Home*" NOT message=*counts*
| spath serial output=serial_number
| table _time, serial_number
``` table command is just for readability ```

Criteria X:
- Event B must occur within 30s immediately after event A.
- Event B must have the same serial number as event A.
- Event B's message field must contain the phrase "placeholder 123".

Any event that matches criteria X, I want to extract data from. How can I use this data from event A to search for event B?

(A capture was attached to show what the current table looks like.)
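A minimal sketch of one search-time correlation, assuming the serial extraction works for both event types: pull A and B candidates in one search, sort into time order per serial number, and carry A's timestamp forward with streamstats so each B can be tested against the most recent A:

index="june_analytics_logs_prod" ((message="* new_state: Diagnostic, old_state: Home*" NOT message=*counts*) OR message="*placeholder 123*")
| spath serial output=serial_number
| eval is_A=if(match(message, "new_state: Diagnostic, old_state: Home"), 1, 0)
| eval A_time=if(is_A=1, _time, null())
| sort 0 serial_number _time
| streamstats last(A_time) as last_A_time by serial_number
| where is_A=0 AND _time - last_A_time <= 30

map is the other common approach (one subsearch per event A), but the streamstats pattern scales better when event A fires often.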
I develop an app on a private Splunk Enterprise server and have a piece of code that accesses the REST API:

# Use Splunk REST API to get all input parameters
splunkd_uri = os.environ.get("SPLUNKD_URI", "https://127.0.0.1:8089")
endpoint = f"{splunkd_uri}/servicesNS/nobody/{app_name}/data/inputs/{app_name}"
headers = {
    'Authorization': f'Splunk {session_key}'
}
response = requests.get(endpoint, headers=headers, verify=False, timeout=30)

Everything works locally, but when I run AppInspect before submitting to Splunk Cloud I get:

FAILURE: If you are using requests.get to talk to your own infra with non-public PKI, make sure you bundle your own CA certs as part of your app and pass the path into requests.get as an arg. File: bin\utils\splunk_rest.py Line Number: 19

I am trying to understand how to solve this issue, because if I bundle a CA that matches the server I am working on, it will not satisfy the Splunk Cloud servers that my clients will use. I think I am misunderstanding a core piece of how to use the REST API programmatically. What is the correct way to go about this? Can it work both on Splunk Enterprise and on Splunk Cloud? Any clue or tip may help. Thanks.
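One commonly suggested way to satisfy this check is to call splunkd through the Splunk SDK for Python instead of raw requests, since the SDK handles the local management connection for you. A minimal sketch, assuming splunklib is bundled in the app's bin directory and that app_name / session_key are as in the original code:

import splunklib.client as client

# Connect back to the local splunkd using the session key
# handed to the modular input / custom endpoint at runtime
service = client.connect(
    host="127.0.0.1",
    port=8089,
    token=session_key,
    owner="nobody",
    app=app_name,
)

# Enumerate this app's inputs instead of GETing the REST endpoint by hand
for item in service.inputs:
    print(item.name, item.content)

If staying with requests, the check wants verify=<path to a CA bundle> rather than verify=False; since the management port's certificate is Splunk-issued on both Enterprise and Cloud stacks, some apps pass the platform's bundled CA file (under $SPLUNK_HOME/etc/auth/) or expose the CA path as a configurable app setting rather than shipping a fixed certificate.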
We have a customer who gets IT mainframe insights today via CA SAS/MICS, a costly platform. They are seeking a solution for data consumption integrated with the Splunk platform. We expect to ingest and host SMF data in IZPCA. What are the customer's options for consuming reports in Splunk via IBM Z Performance and Capacity Analytics?
I would like to create a search with data models where my event ID is 39. However, there is no data model that fulfills my criteria. Is there anyone kn
Hi Splunkers, I'm working on a React app for Splunk based on the documentation provided at https://splunkui.splunk.com/Packages/create/CreatingSplunkApps. I need to hide (or configure) the Splunk header bar when the React app is in use. I've come across several community posts suggesting that overriding the default view files might be necessary, but I'm unsure how to configure this within a React app. I'd appreciate any guidance on how to achieve this.
I installed the Cisco network add-on, but only the main index works and I cannot store logs in another index.
According to Windows Export Certificate - Splunk Security Content, the detection uses a macro in the first line of its query:

`certificateservices_lifecycle` EventCode=1007
| xmlkv UserData_Xml
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, SubjectName, UserData_Xml
| rename Computer as dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `windows_export_certificate_filter`

And the `certificateservices_lifecycle` macro expands to:

(source=XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-System/Operational OR source=XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-User/Operational)

As far as I know, a query starting with only a source filter returns nothing, so I added index=* before `certificateservices_lifecycle`, but unfortunately I still don't get any results. I then used metasearch to check whether the data is available at all, with these queries:

First query:
| metasearch index=* source IN ("XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-System/Operational")

Second query:
| metasearch index=* source IN ("XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-User/Operational")

and both returned 0 results. The question is: if I want to get data from source="XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-User/Operational", do I need to configure it on the endpoints, or can it be solved within Splunk? Thanks
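If metasearch over index=* finds nothing, the data almost certainly is not being collected, so this is fixed on the endpoints rather than in SPL. A minimal sketch of the inputs.conf additions for the Windows forwarders (the index name is a placeholder; renderXml matches the XmlWinEventLog source values the macro expects):

[WinEventLog://Microsoft-Windows-CertificateServicesClient-Lifecycle-System/Operational]
disabled = 0
renderXml = true
index = wineventlog

[WinEventLog://Microsoft-Windows-CertificateServicesClient-Lifecycle-User/Operational]
disabled = 0
renderXml = true
index = wineventlog

These channels are typically present on Windows hosts by default, but the standard Windows TA inputs do not collect them unless stanzas like the above are explicitly enabled.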
Hi, good day! I just wanted to get your insights or any reference you can share. We have a clustered multisite Splunk environment and a Splunk ES SHC. Let's say a correlation search is triggered and generates notable events. Is it possible for these notable events to be sent to another platform (for example, Google Chronicle)? If yes, can you share how it would be done? So, when a notable event is generated, it would be stored locally in Splunk and also forwarded to Google Chronicle SOAR. Is that possible?
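One hedged pattern for this: notable events are regular events in the notable index on the ES search head tier, so a scheduled search can pick up new notables and hand them to an alert action that posts them onward. A minimal sketch using the built-in webhook action in savedsearches.conf, where the URL is a placeholder for whatever ingestion endpoint Chronicle exposes and the field list is illustrative:

[Forward notables to Chronicle]
search = index=notable earliest=-5m@m latest=@m | fields _time, search_name, urgency, src, dest, user
cron_schedule = */5 * * * *
enableSched = 1
disabled = 0
action.webhook = 1
action.webhook.param.url = https://chronicle.example.com/ingest

Heavier but more robust routes include a custom alert action or add-on that calls Chronicle's ingestion API with proper authentication and retry handling, or syslog output via a heavy forwarder; the notables remain stored in Splunk either way.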