All Posts

Hi @gcusello, yes, I tried that, but it doesn't seem to be working.
| rex field=log "\<>\<>\<>\<>\|\|(?<temp>[^\|]+)\|\|\s(?<message>.+)"
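A quick way to sanity-check a rex pattern outside Splunk is Python's re module. This is a sketch against a made-up sample line in the format described in this thread (the sample value 1407 comes from the log snippet posted below); the pattern mirrors the rex above.

```python
import re

# Mirrors the rex above: literal <><><><>, then ||<temp>||, then whitespace and the message.
pattern = re.compile(r"<><><><>\|\|(?P<temp>[^|]+)\|\|\s(?P<message>.+)")

# Hypothetical sample line in the format described in this thread.
sample = "<><><><>||1407|| something went wrong"
match = pattern.search(sample)
if match:
    print(match.group("temp"))     # value between the double pipes -> 1407
    print(match.group("message"))  # everything after the trailing "|| "
```

If the rex isn't matching in Splunk, checking it this way helps separate a pattern problem from a field-extraction problem.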
Hi @Thulasinathan_M, did you try inserting the token in the default tag?

<input type="text" token="your_text_token">
  <label>Text label</label>
  <default>$your_previous_token$</default>
  <prefix>your_field="</prefix>
  <suffix>"</suffix>
</input>

Ciao. Giuseppe
Hi Splunk Experts. I have a table with multiple fields, and on click I've created a token that captures a value from it. I need to pass this token's value to a textbox in another panel. Is that possible? Please advise!
I tried to follow the answer and found that the data could not be transferred. The cause was that the client's certificate is not trusted for TLS communication. Do you know how to solve this?
@ITWhisperer Will you be able to provide the rex for the below log format too? <><><><>||1407||
Dataframe row: {"_c0":{"0":"deleted_count","1":"18","2":"8061","3":"0","4":"366619","5":"2","6":"1285","7":"2484","8":"1705","9":"1517","10":"12998","11":"13","12":"57","13":"0","14":"0","15":"0","16":"0","17":"1315","18":"0","19":"0","20":"0","21":"0","22":"0","23":"410973","24":"18588725","25":"0","26":"0","27":"0","28":"0","29":"25238"},"_c1":{"0":"load_date","1":"2023-08-28","2":"2023-08-28","3":"2023-08-28","4":"2023-08-28","5":"2023-08-28","6":"2023-08-28","7":"2023-08-28","8":"2023-08-28","9":"2023-08-28","10":"2023-08-28","11":"2023-08-28","12":"2023-08-28","13":"2023-08-28","14":"2023-08-28","15":"2023-08-28","16":"2023-08-28","17":"2023-08-28","18":"2023-08-28","19":"2023-08-28","20":"2023-08-28","21":"2023-08-28","22":"2023-08-28","23":"2023-08-28","24":"2023-08-28","25":"2023-08-28","26":"2023-08-28","27":"2023-08-28","28":"2023-08-28","29":"2023-08-28"},"_c2":{"0":"redelivered_count","1":"0","2":"1","3":"0","4":"0","5":"0","6":"0","7":"204","8":"0","9":"0","10":"0","11":"0","12":"0","13":"0","14":"0","15":"0","16":"0","17":"0","18":"0","19":"0","20":"0","21":"0","22":"0","23":"0","24":"9293073","25":"0","26":"0","27":"0","28":"0","29":"0"},"_c3":{"0":"table_name","1":"pc_dwh_rdv.gdh_ls2lo_s99","2":"pc_dwh_rdv.gdh_spar_s99","3":"pc_dwh_rdv.cml_kons_s99","4":"pc_dwh_rdv.gdh_tf3tx_s99","5":"pc_dwh_rdv.gdh_wechsel_s99","6":"pc_dwh_rdv.gdh_revolvingcreditcard_s99","7":"pc_dwh_rdv.gdh_phd_s99","8":"pc_dwh_rdv.gdh_npk_s99","9":"pc_dwh_rdv.gdh_npk_s98","10":"pc_dwh_rdv.gdh_kontokorrent_s99","11":"pc_dwh_rdv.gdh_gds_s99","12":"pc_dwh_rdv.gdh_dszins_s99","13":"pc_dwh_rdv.gdh_cml_vdarl_le_ext_s99","14":"pc_dwh_rdv.gdh_cml_vdarl_s99","15":"pc_dwh_rdv.gdh_avale_s99","16":"pc_dwh_rdv.gdh_spar_festzi_s99","17":"pc_dwh_rdv_gdh_monat.gdh_phd_izr_monthly_s99","18":"pc_dwh_rdv.gdh_orig_sparbr_daily_s99","19":"pc_dwh_rdv.gdh_orig_terming_daily_s99","20":"pc_dwh_rdv.gdh_orig_kredite_daily_s99","21":"pc_dwh_rdv.gdh_orig_kksonst_daily_s99","22":"pc_dwh_rdv.gdh_orig_baufi_daily_s99","23":"pc_dwh_rdv_creditcard.credit_card_s99","24":"pc_dwh_rdv_csw.fkn_security_classification_s99","25":"pc_dwh_rdv_loan_appl.ccdb_loan_daily_s99","26":"pc_dwh_rdv_loan_appl.leon_loan_monthly_s99","27":"pc_dwh_rdv_loan_appl.nospk_loan_daily_s99","28":"pc_dwh_rdv_partnrdata.fkn_special_target_group_s99","29":"pc_dwh_rdv_talanx.insurance_s99"}}
Hi @theprophet01, using a real-time search like yours isn't a good idea, because it dedicates one CPU to that search alone, reducing the resources of your whole Splunk infrastructure. It's better to schedule the search, e.g. every 5 minutes, so that when a run finishes, the search releases the CPU for other jobs. In addition, your search can be optimized to reduce execution time and CPU use:

| tstats max(_time) AS latest count BY host
| eval recent=if(latest > relative_time(now(), "-5m"), 1, 0), realLatest=strftime(latest, "%Y-%m-%d %H:%M:%S")
| where recent=0
| rename host AS Host, realLatest AS "Latest Timestamp"
| table Host, "Latest Timestamp"

However, this search finds only the hosts that didn't send logs in the last 5 minutes but did send logs in the previous 10 minutes (using a 15-minute timeframe); if a host doesn't send logs for more than 15 minutes, you lose that information. The best approach is to keep a lookup containing all the hosts to monitor (called e.g. perimeter.csv), with at least one column (host), and run a search like the following:

| tstats max(_time) AS latest count BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats max(latest) AS latest sum(count) AS total BY host
| where total=0
| eval realLatest=strftime(latest, "%Y-%m-%d %H:%M:%S")
| rename host AS Host, realLatest AS "Latest Timestamp"
| table Host, "Latest Timestamp"

This way you have to maintain the lookup, but you get a more reliable control. Ciao. Giuseppe
Hello, I have created a Splunk app and it is currently in the marketplace. I am getting a timeout error while pulling data from my API into the app. Upon investigation, I figured out that I need to increase 'splunkdConnectionTimeout' from 30 seconds to a higher value, in `$SPLUNK_HOME/lib/python3.7/site-packages/splunk/rest/__init__.py`, line 52. I want this modification to be applied as soon as the user installs my app and restarts Splunk. I tried doing this with a `web.conf` file in my app, but I am not sure where and how to use it. Please help me understand how I can do this.
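For reference, a sketch of what that app-level override might look like, assuming `splunkdConnectionTimeout` can be raised from an app's `web.conf` (worth verifying against the web.conf spec for your Splunk version rather than editing `__init__.py` directly, which a restart or upgrade can revert):

```
# $SPLUNK_HOME/etc/apps/<your_app>/default/web.conf
# Assumption: splunkdConnectionTimeout takes seconds and merges from app config.
[settings]
splunkdConnectionTimeout = 300
```

Editing shipped Python files under `$SPLUNK_HOME/lib` is generally discouraged, since upgrades overwrite them; a .conf-based setting survives restarts and upgrades.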
Hi @yr, as I said, I don't know why, but some time ago Splunk changed its approach, using the same sourcetype for all WinEventLogs and distinguishing them by source. I saw that you forced the sourcetype in each inputs stanza; this way you can be sure to get the sourcetype you want and you shouldn't miss any log. I disagree with the last input stanza, though: Splunk's own logs are already ingested by another input stanza, so this is a duplication; in addition, since you forced the sourcetype there, you're losing some internal monitoring features (e.g. the Monitoring Console). Ciao. Giuseppe
Hello! Could you write a guide or instructions, with examples, to help us integrate Splunk with Git? Starting from installing Git, through to putting scripts under version control so they can easily be matched against scheduled or production rules, with dedicated shell menus and so on. Thank you!
Hi @gcusello, thank you very much for your assistance. What you understood is correct; both of your queries work perfectly, as expected.
Hi, I tried to follow the link https://splunkonbigdata.com/failed-to-start-kv-store-process-see-mongod-log-and-splunkd-log-for-details/ and, surprisingly, after a restart everything was back to normal.
Hi, I need to build a dashboard in Dashboard Studio. I have already configured some rules, and if any of them triggers I need to show it on the dashboard, like this...
Sure you can do that - you can either populate the dropdown with static options with the month name and add the .csv on the end for the value, e.g.

<input type="dropdown" token="month">
  <label>Month</label>
  <choice value="july">July</choice>
  <choice value="august">August</choice>
  ... more choices
</input>

then your search is

| inputlookup $month$.csv
...

or you could make your lookup dynamic and look for lookups that match a pattern, e.g.

<input type="dropdown" token="month">
  <label>Month</label>
  <search>
    <query>
      | rest splunk_server=local /servicesNS/-/-/data/lookup-table-files
      | where 'eai:acl.app'="your_app_name"
      | fields title
      | where match(title, "^(january|february|march|april|may|june|july|august|september|october|november|december)\.csv$")
      | eval month=replace(title, "\.csv$", ""), month=upper(substr(month, 1, 1)).substr(month, 2)
    </query>
  </search>
  <fieldForLabel>month</fieldForLabel>
  <fieldForValue>title</fieldForValue>
</input>
Hi, agreed, but why are source and sourcetype mixed up? It doesn't follow what I have set in inputs.conf. How do I fix it?

DC01.xxx.xxx</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>CORP\ADmaint</Data><Data Name='SubjectUserName'>ADmaint</Data><Data Name='SubjectDomainName'>CORP</Data><Data Name='SubjectLogonId'>0x1b73fc</Data><Data Name='PrivilegeList'>SeSecurityPrivilege SeBackupPrivilege SeRestorePrivilege SeTakeOwnershipPrivilege SeDebugPrivilege

host = DC01
source = WinEventLog:Security
sourcetype = WinEventLog

This source and sourcetype are mixed up and not according to inputs.conf.
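For comparison, a minimal inputs.conf sketch with an explicit sourcetype per stanza (stanza, index, and sourcetype names here are placeholders; note that Splunk may still normalize the sourcetype for some Windows inputs, e.g. when XML rendering is enabled):

```
# Hypothetical example - adjust stanza, index, and sourcetype to your environment.
[WinEventLog://Security]
index = your_windows_index
sourcetype = WinEventLog:Security
disabled = 0
```

Comparing what each event actually shows for source and sourcetype against the stanza that collected it is usually the quickest way to spot which input is winning.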
I had a similar issue. I created a new index for my Windows servers, defined the sourcetype in inputs.conf, and deployed the _TA_Windows app. Search works fine, but sourcetype and source are interchanged. Any thoughts?
I got it working again. I had to recopy coldtofrozenexample.py and edit it for my environment; I was missing some sections from the script.
I like this query, but I have indices with long names that incorporate underscores "_", and the split command is not working in this scenario. Quotation marks did not work, but asterisks did. I do not want to use asterisks, as I will be generating alerts and do not want extra characters in the message. Please let me know how to use split with this naming convention for indices.

| metasearch index=AB_123_CDE OR index=CD_345_EFG OR index=EF_678_HIJ
| stats count by index
| append [ | noop | stats count | eval index=split("AB_123_CDE;CD_345_EFG;EF_678_HIJ", ";") | mvexpand index ]
| stats max(count) as count by index
| where count = 0
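As a side note, splitting on ";" is not itself affected by underscores in the names; they are plain characters to the delimiter-based split. A quick Python check of the equivalent operation (same index names as above):

```python
# split() divides only on the given delimiter; underscores in the
# index names are ordinary characters and pass through untouched.
names = "AB_123_CDE;CD_345_EFG;EF_678_HIJ".split(";")
print(names)  # ['AB_123_CDE', 'CD_345_EFG', 'EF_678_HIJ']
```

If the SPL version misbehaves, the cause is more likely elsewhere in the pipeline (e.g. case sensitivity of index names, or the subsearch not emitting rows) than in split handling underscores.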
I am sorry, but this sounds like a bad excuse for not thinking this through. I have never seen it be popular, recommended, or even supported to install the forwarder with the server. If you have any good links on this, then please supply them. If the Docker people want this, then create a solution for them and leave the rest of us alone. Imagine all the automation (Puppet, Ansible, self-coded, and so on) that now has to be changed. Monitoring of the user and service needs to be changed. There must be a ton of code/checks/monitoring that needs to be changed.

In regards to when this change was implemented, I did a quick install test (wiped each time):

rpm -i splunkforwarder-7.3.0-657388c7a488-linux-2.6-x86_64.rpm - owner & group = splunk
rpm -i splunkforwarder-8.0.4-767223ac207f-linux-2.6-x86_64.rpm - owner & group = splunk
rpm -i splunkforwarder-8.2.6-a6fe1ee8894b-linux-2.6-x86_64.rpm - owner & group = splunk
rpm -i splunkforwarder-9.0.0-6818ac46f2ec-linux-2.6-x86_64.rpm - owner & group = splunk
rpm -i splunkforwarder-9.0.5-e9494146ae5c.x86_64.rpm - owner & group = splunk
rpm -i splunkforwarder-9.1.0.1-77f73c9edb85.x86_64.rpm - owner & group = splunkfwd

And just to verify, I upgraded from 9.0.5 to 9.1.0.1, and yes, the owner changed from splunk to splunkfwd. So be careful out there. To be fair, support said that this should be fixed in the coming 9.1.1, retaining the previous user. Even the documentation uses "splunk" as the owner all the way from version 9.0 to 9.0.5: https://docs.splunk.com/Documentation/Forwarder/9.0.5/Forwarder/Installanixuniversalforwarder So I simply don't buy the excuse. Now, if we are installing 9.1.0.1 and want to keep using "splunk" as the owner, we will have to do it manually, according to support: make the install, create the "splunk" user, update the unit file, chown SPLUNK_HOME to splunk, update SPLUNK_OS_USER=splunk in splunk-launch.conf, and then delete "splunkfwd". Just why.
That said, good or bad reason, it does not change the fact that this was done out of the blue with no prior warning. The same happened with the change from init.d to systemd and when you changed the service name. Sorry for the rant; it just makes me annoyed that this should have been handled completely differently, imo.