All Posts



Hello, what are the best practices for configuring Splunk memory and swap partition space? Our current resources: each of the three indexer nodes has 24 cores, 64 GB of RAM, 2 TB of SSD storage, and a 10-gigabit network link. Each indexer node has 64 GB of physical memory and an 8 GB swap partition; the swap policy is supposed to use swap only after physical memory usage exceeds 70%. The current situation is that only 1.6 GB of physical memory is in use, yet swap usage is 3.8 GB. The alarm details are: [Alarm Name] system.swap.used_pct [Warning content] Swap partition usage has reached 39.76%, and the average has exceeded the 20.0% threshold over the past minute. I have two questions: 1. Why is swap usage so much higher than memory usage? 2. How should memory and swap partition space be configured, and what are the best practices?
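For reference, the percentage in that alarm can be recomputed from `/proc/meminfo` on Linux. Below is a minimal Python sketch; `swap_used_pct` is a hypothetical helper (not part of Splunk), and the `SwapTotal`/`SwapFree` values are invented numbers chosen to reproduce the reported 39.76% figure.

```python
# Hedged sketch: compute swap usage percentage from /proc/meminfo-style text.
# The sample values below are illustrative, picked to match the 39.76% alarm.
def swap_used_pct(meminfo_text):
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            fields[key.strip()] = int(rest.split()[0])  # values are in kB
    total = fields.get("SwapTotal", 0)
    free = fields.get("SwapFree", 0)
    return 100.0 * (total - free) / total if total else 0.0

sample = "SwapTotal:  8388608 kB\nSwapFree:  5053298 kB"
print(round(swap_used_pct(sample), 2))  # → 39.76
```

On a real host the input would come from `open("/proc/meminfo").read()`; the monitoring tool behind the alarm presumably does something equivalent.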
In Dashboard Studio there seems to be no row limit and no next button. The PDF export does show all the rows, so that is one way to work around it.
Hello @shub_loginsoft , this seems to be an issue with browser cookies. You can try clearing your browser data, or use Incognito mode as a temporary workaround. Please let me know if this works!
Hi, @ITWhisperer  The events look like this:

{
  "MetaData": {
    "JENKINS_URL": "https://abc.com",
    "stagename": "ABC_CT",
    "variantname": "NEW_ABC",
    "jobname": "abc",
    "buildnumber": 29,
    "filename": "1729005933566.json"
  },
  "suite": {
    "hostname": "localhost",
    "failures": 0,
    "package": "ABC",
    "tests": 0,
    "name": "ABC_test",
    "id": 0,
    "time": 0,
    "errors": 0,
    "case": [
      { "classname": "xyz", "name": "foo1", "time": 0, "status": "Passed" },
      { "classname": "pqr", "name": "foo2", "time": 0, "status": "Passed" },
      ........
    ]
  }
}

There will be many events like this for a single project, and the values will be repeated across those events; for example, suite and case will be repeated.

index=... sourcetype=...
| spath ...
| stats count(eval(Status="Execution Failed" OR Status="case_Failed")) AS Failed_cases,
        count(eval(Status="Passed")) AS Passed_cases,
        count(eval(Status="Failed" OR Status="case_Error")) AS Execution_Failed_cases,
        dc(case) AS Total_cases,
        dc(suite) AS "Total suite"
        by Job_Name Build_Variant Jenkins_Server

I use spath to extract every parameter, then use them in the query.
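Outside of SPL, the counting logic in that stats call can be sketched in plain Python against one event's JSON. Assumptions: the field names follow the sample event above, and the third case entry is invented purely to exercise the failure branch.

```python
import json

# Hedged sketch: same aggregation as the stats command, over one event.
event = json.loads("""
{
  "suite": {
    "name": "ABC_test",
    "case": [
      {"classname": "xyz", "name": "foo1", "status": "Passed"},
      {"classname": "pqr", "name": "foo2", "status": "Passed"},
      {"classname": "uvw", "name": "foo3", "status": "Execution Failed"}
    ]
  }
}
""")

cases = event["suite"]["case"]
passed = sum(1 for c in cases if c["status"] == "Passed")
failed = sum(1 for c in cases if c["status"] in ("Execution Failed", "case_Failed"))
distinct_cases = len({c["name"] for c in cases})  # roughly dc(case)
print(passed, failed, distinct_cases)  # → 2 1 3
```

Since the same suite/case values repeat across events, a distinct count (like `dc` in SPL, or the set above) is what keeps the totals from being inflated.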
Hi @inventsekar  I got this error when using the sendemail command; it is probably because I am not an admin. error: command="sendemail", 'rootCAPath' while sending mail to: Thanks
Hi @inventsekar , if I uncheck "Open in new tab", it does not open a new tab and uses the current one instead. My goal is to open one new tab, not two. Thank you.
Hi @ITWhisperer  Here's the code. Thank you so much!

{
  "type": "splunk.singlevalueicon",
  "options": {
    "showValue": false,
    "icon": "splunk-enterprise-kvstore://12345abcdefg"
  },
  "eventHandlers": [
    {
      "type": "drilldown.customUrl",
      "options": {
        "url": "/splunk/app/test_app/second_dashboard?form.student_token=$student_token$",
        "newTab": true
      }
    }
  ],
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
This is regarding the integration between Splunk and Google Workspace. I have followed the documentation below to configure the integration, but the log data is not being ingested into the specified index in Splunk, and I cannot view the Google Workspace logs in Splunk. Additionally, there are no apparent errors after the integration setup. I would appreciate any advice or precautions for installing the Add-on for Google Workspace.

# Additional info
Upon checking the log files, the following error was found; however, no 40x errors were present:

Could not refresh service account credentials because of ('unauthorized_client: Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.', {'error': 'unauthorized_client', 'error_description': 'Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.'})

# Referenced Documentation
## Installation of the Add-on for Google Workspace
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Installation
## Issuing Authentication Keys for Accounts Created on the Add-on for Google Workspace
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Configureinputs1
-> Refer to the "Google Workspace activity report prerequisites" section in the above document.
## Add-on Configuration
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Configureinputs2
-> Refer to the "Add your Google Workspace account information" and "Configure activity report data collection using Splunk Web" sections in the above document.
## Troubleshooting
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Troubleshoot
-> Refer to the "No events appearing in the Splunk platform" section in the above document.
https://community.splunk.com/t5/Getting-Data-In/Why-is-Splunk-Add-on-for-Google-Workspace-inputs-getting-401/m-p/602874
Hello, we urgently need a Splunk local disaster recovery solution and would appreciate a best-practice explanation. The existing Splunk deployment consists of 3 search heads, 1 deployer, 1 master node, 1 DMC, 3 indexers, and 2 heavy forwarders. In this architecture, the search factor and replication factor are both 2, and there is existing historical data. The local disaster recovery requirements are: if the host room housing the existing data center's Xinchuang SIEM system is shut down, the data in the disaster recovery room must still be queryable normally; and shutting down the newly built disaster recovery host room must not affect the use of the existing data center's SIEM system. RPO must be 0 (no data loss), and RTO must be within 6 hours.
Hi @Neekheal  If the text is literal and the same in all logs, then you can include those exact lines directly inside the rex. Let's say "CSVSentinfo:L00Show your passport" is a "constant" in all logs; then you keep it as part of the rex pattern: "(?<PC_sTime>\d{12})CSVSentinfo\:L00Show your passport.*(?P<Field2>rex cmd)". To match newline and/or tab characters, please include "\n" and "\t".
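The same idea can be checked quickly in Python's re module, whose named-group and flag syntax is close to the PCRE that rex uses. The sample string below is invented; the group name PC_sTime matches the thread above.

```python
import re

# Hedged sketch: a literal marker string is embedded directly in the
# pattern, and the DOTALL flag ((?s), like (?ms) in rex) lets ".*" cross
# newline characters.
sample = "172900593356CSVSentinfo:L00Show your passport\nsecond line\ntail"
pattern = r"(?s)(?P<PC_sTime>\d{12})CSVSentinfo:L00Show your passport.*tail"
m = re.search(pattern, sample)
print(m.group("PC_sTime"))  # → 172900593356
```

Without `(?s)`, the `.*` would stop at the first newline and the overall match would fail, which is the usual reason a multiline extraction silently returns nothing.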
I am from Japan; sorry for my poor English and limited knowledge of Splunk. I received a Splunk Enterprise trial license and would like to import Palo Alto logs and issue alerts (via email, etc.), but I am not sure how to do this (manually importing past logs succeeded). I also wonder whether past logs can trigger alerts. About our environment: I set up an all-in-one virtual server in our FJ Cloud (Fujitsu Cloud), and Splunk is running there. There are no forwarders installed on other servers. I would be more than happy if you could let me know. Thank you for your support.
What should the rex command be to skip newlines, characters, numbers, and special characters, and then search for and extract "(?<PC_sTime>\d{12})CSVSentinfo:L00Show your passport*"?
Hi @Neekheal  All the rex patterns should be combined into a single rex command. That is, after the first pattern, add a pattern that matches the extra characters in between, then the second pattern, then another pattern for the extra characters, and so on. For example, change:

index=khisab_ustri sourcetype=sosnmega "*BP Tank: Bat from surface = *K00C0*"
| dedup _time
| rex field=_raw "(?ms)(?<time_string>\d{12})BP Tank: Bat from Surface .*K00C0\d{21}(?<kmu_str>\d{2})*"
| rex field=_raw "(?<PC_sTime>\d{12})CSVSentinfo:L00Show your passport*"

to:

index=khisab_ustri sourcetype=sosnmega "*BP Tank: Bat from surface = *K00C0*"
| dedup _time
| rex field=_raw "(?ms)(?<time_string>\d{12})BP Tank: Bat from Surface .*K00C0\d{21}(?<kmu_str>\d{2}) <<< some rex patterns to match the characters in between >>> (?<PC_sTime>\d{12})CSVSentinfo:L00Show your passport*"
Hi @LearningGuy .. in step 2, please uncheck "Open in new tab"; sometimes that option is what causes the two tabs. Thanks.
Thank you for your input. Proofpoint does not provide much useful information about this product. We're planning to move away from it, so it's not worth the effort.
Hi @mmg245 .. troubleshooting this depends on your custom app. The only troubleshooting step that comes to mind is to check the internal logs for any warnings or errors related to the app or report. If nothing works, and if it is acceptable, as a last troubleshooting step you could try restarting Splunk. Thanks.
That's right. I never liked that solution either, and we have plans to move away from it in the near future. Thank you for your input. 
Hi, I am having some trouble understanding how to fetch a multiline pattern in a single event. I have a logfile in which I am searching for this pattern, which is scattered across multiple lines:

123456789102BP Tank: Bat from Surface = #07789*K00C0**************************************** 00003453534534534
****after Multiple Lines***
123456789107CSVSentinfo:L00Show your passport
****after Multiple Lines***
123456789110CSVSentinfo Data:z800
****after Multiple Lines***
123456789113CSVSentinfoToCollege:
****after Multiple Lines***
123456789117CSVSentinfoFromCollege:
****after Multiple Lines***
123456789120CSVSentinfo:G7006L
****after Multiple Lines***
123456789122CSVSentinfo:A0T0
****after Multiple Lines***
123456789124BP Tank: Bat to Surface L000passportAccepted

I have tried the query below to find all the occurrences, but with no luck:

index=khisab_ustri sourcetype=sosnmega "*BP Tank: Bat from surface = *K00C0*"
| dedup _time
| rex field=_raw "(?ms)(?<time_string>\d{12})BP Tank: Bat from Surface .*K00C0\d{21}(?<kmu_str>\d{2})*"
| rex field=_raw "(?<PC_sTime>\d{12})CSVSentinfo:L00Show your passport*"
| rex field=_raw "(?<CP_sTime>\d{12})CSVSentinfo Data:z800*"
| rex field=_raw "(?<MTB_sTime>\d{12})CSVSentinfoToCollege:*"
| rex field=_raw "(?<MFB_sTime>\d{12})CSVSentinfoFromCollege:*"
| rex field=_raw "(?<PR_sTime>\d{12})CSVSentinfo:G7006L*"
| rex field=_raw "(?<JR_sTime>\d{12})CSVSentinfo:A0T0*"
| rex field=_raw "(?<MR_sTime>\d{12})BP Tank: Bat to Surface =.+L000passportAccepted*"
| table (PC_sTime- time_string),(CP_sTime- PC_sTime),(MTB_sTime-CP_sTime),(MFB_sTime-MTB_sTime),(PR_sTime- MFB_sTime),(JR_sTime-PR_sTime),(MR_sTime-JR_sTime)

Sample Data:
123456789102BP Tank: Bat from Surface = #07789*K00C0**************************************** 00003453534534534
123456789103UniverseToMachine\0a<Ladbrdige>\0a <SurfaceTake>GOP</Ocnce>\0a <Final_Worl-ToDO>Firewallset</KuluopToset>\0a</
123456789105SetSurFacetoMost>7</DecideTomove>\0a
<TakeaKooch>&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;</SurfaceBggien>\0a <Closethe Work>0</Csloethe Work>\0a
123456789107CSVSentinfo:L00Show your passport
123456789108BP Tank: Bat from Surface = close ticket
123456789109CSVSentinfo:Guide iunit
123456789110CSVSentinfo Data:z800
123456789111CSVGErt Infro"8900
123456789112CSGFajsh:984
123456789113CSVSentinfoToCollege:
123456789114CSVSentinfo Data:z800
123456789115CSVSentinfo Data:z800
123456789116Sem startedfrom Surface\0a<Surafce have a data>\0a <Surfacecame with Data>Ladbrdige</Ocnce>\0a <Ladbrdige>Ocnce</Final_Worl>\0a <KuluopToset>15284</DecideTomove>\0a <SurafceCall>\0a <wait>\0a <wating>EventSent</SurafceCall>\0a </wait>\0a </sa>\0a</Surafce have a data>\0a\0a
123456789117CSVSentinfoFromCollege:
123456789118CSVSentinfo:sadjhjhisd
123456789119CSVSentinfo:Loshy890
123456789120CSVSentinfo:G7006L
123456789121CSVSentinfo:8shhgbve
123456789122CSVSentinfo:A0T0
123456789123CSVSentinfo Data:accepted
123456789124BP Tank: Bat to Surface L000passportAccepted
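The extraction being attempted above can be sketched outside Splunk: pull the 12-digit prefix in front of each marker out of one multiline event, then take consecutive differences. The marker strings come from the sample data in the thread; the event text below is an invented, abbreviated stand-in.

```python
import re

# Hedged sketch: find the 12-digit timestamp before each known marker,
# then compute differences between consecutive markers.
event = (
    "123456789102BP Tank: Bat from Surface = #07789*K00C0***\n"
    "some noise line\n"
    "123456789107CSVSentinfo:L00Show your passport\n"
    "123456789110CSVSentinfo Data:z800\n"
)
markers = [
    ("time_string", "BP Tank: Bat from Surface"),
    ("PC_sTime", "CSVSentinfo:L00Show your passport"),
    ("CP_sTime", "CSVSentinfo Data:z800"),
]
times = {}
for name, marker in markers:
    m = re.search(r"(\d{12})" + re.escape(marker), event)
    if m:
        times[name] = int(m.group(1))

deltas = {
    f"{b} - {a}": times[b] - times[a]
    for (a, _), (b, _) in zip(markers, markers[1:])
    if a in times and b in times
}
print(deltas)  # → {'PC_sTime - time_string': 5, 'CP_sTime - PC_sTime': 3}
```

One thing worth noting about the original search: SPL's table command only displays existing fields and does not evaluate expressions, so differences like PC_sTime - time_string would need to be computed with eval before being tabled.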
Dashboard Studio working with Reports and Time Range

@sainag_splunk  I am currently using the new Dashboard Studio interface, and the dashboards make calls to saved reports in Splunk. Is there a way to have a time range work for the dashboard, but also have it apply to the reports? The issue we face is that we are able to add the reports to the Studio dashboard, but by default they are stuck as static reports. How can we add a time range input that will work with both the dashboard and the reports?

The users viewing this dashboard are third parties, people we do not want to give access to the indexes (for example, users outside the org), which is why the dashboard uses saved reports that are viewable. But as mentioned, the saved reports display static results, and we want them to change when we specify a time range with the input. We are trying not to give third-party users access to Splunk indexes.

We also looked into embedded reports, but found that "Embedded reports also cannot support real-time searches."
Please share the source code of your dashboard in a code block </>