All Posts


Alternatively, use fillnull:

<your_initial_search>
| fillnull value="unverified" account
| stats count by account
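To see how that behaves, here is a minimal, self-contained sketch you can paste into a search bar; it fabricates three events with makeresults, and the account values are invented purely for illustration:

| makeresults count=3
``` fabricate three events, the second one with a missing account value ```
| streamstats count as n
| eval account=if(n=2, null(), "acct_".n)
``` fill the missing value so it is counted instead of being dropped ```
| fillnull value="unverified" account
| stats count by account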
It is rather strange to use the exact same base search in a subsearch. If nothing else, this reduces performance. It is also strange that you have to use two consecutive transpose inside the subsearch seemingly just to get a list of id_flux values. I think you are looking for appendpipe, not append.

index="bloc1rg" AND libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d
| stats max(_time) as last_time count by id_flux libelle
| appendpipe [chart sum(count) over id_flux by libelle]
| eventstats values(IN_*) as IN_* values(OUT_*) as OUT_* by id_flux
| search libelle=*
| eval IN_BT_OUT_BT=IN_BT+OUT_BT
| eval IN_PREC_OUT_PREC=IN_PREC+OUT_PREC
| eval IN_RANG_OUT_RANG=IN_RANG+OUT_RANG
| search IN_BT_OUT_BT>=2 AND IN_PREC_OUT_PREC >=2 AND IN_RANG_OUT_RANG >=2
``` the above is equivalent to search 1 ```

In fact, appendpipe can also help you determine the response times you are looking for, if I am guessing your intention correctly:

index="bloc1rg" AND libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d
| stats max(_time) as last_time count by id_flux libelle
| appendpipe [chart sum(count) over id_flux by libelle]
| eventstats values(IN_*) as IN_* values(OUT_*) as OUT_* by id_flux
| search libelle=*
| eval IN_BT_OUT_BT=IN_BT+OUT_BT
| eval IN_PREC_OUT_PREC=IN_PREC+OUT_PREC
| eval IN_RANG_OUT_RANG=IN_RANG+OUT_RANG
| search IN_BT_OUT_BT>=2 AND IN_PREC_OUT_PREC >=2 AND IN_RANG_OUT_RANG >=2
``` the above is equivalent to search 1 ```
| appendpipe [chart limit=0 max(last_time) over id_flux by libelle]
| search NOT libelle=*
| fields - libelle last_time
| eval response_rang = OUT_RANG - IN_RANG
| eval response_prec = OUT_PREC - IN_PREC
| eval response_bt = OUT_BT - IN_BT

For the purpose of getting the help you wanted from this forum, complex SPL - especially with multiple transpose - only adds barriers to volunteers' understanding of your real objective. I suggest that you describe the basic data set, describe the desired outcome, and describe the logic between the desired outcome and the data. Illustrate with text tables and strings (anonymize as necessary).
@juakashi Before executing the ./splencore.sh test command, you need to set a couple of environment variables:

export SPLUNK_HOME=/opt/splunk
export LD_LIBRARY_PATH=/opt/splunk/lib

After that you can continue with Validate & Test the Connection.
We recently moved to S2 and our initial retention was set to 6 months. A month after the migration we decided to reduce the retention to 3 months but did not see any reduction in storage in S3. Support found out that versioning was enabled in AWS by the PS engineer during the migration, which caused this issue. We updated this in indexes.conf:

remote.s3.supports_versioning = false

Now the new data which is rolled over is deleted, but we still have old cruft remaining in S3 which is costing us heavily. Support wants us to delete the data manually by running commands from the CLI. Isn't there a better way of doing this? Do AWS lifecycle rules work only for old data which is still lying there? What are the ways to get rid of this old data apart from removing it manually?
@woodcock We recently moved to S2 and our initial retention was set to 6 months. A month after the migration we decided to reduce the retention to 3 months but did not see any reduction in storage in S3. Support found out that versioning was enabled in AWS by the PS engineer during the migration, which caused this issue. Now the new data which is rolled over is deleted, but we still have old cruft remaining in S3 which is costing us heavily. Support wants us to delete the data manually by running commands from the CLI. Isn't there a better way of doing this? Do AWS lifecycle rules work only for old data which is still lying there? What are the ways to get rid of this old data apart from removing it manually?
Will you please help so that 1 row will manage for all? Please share the code if it's possible.
Hi

Actually I am trying to search data, even the archived data, but as you can see in the screenshot below I only get the last 3 months, because I think the data older than 3 months was archived.

Could you explain to me how to retrieve data older than 3 months in my case?

Regards
Hi, Is there any app on Splunkbase to analyze the logs in my Splunk ES and stop unwanted log ingestion? Thanks
Hi, Can anyone please figure out from this list of apps which ones from the web logs are not required for investigation / not needed for ingestion into Splunk, so we can save on license cost?

ssl windows-remote-management web-browsing sap ms-office365-base google-base soap new-relic okta ms-onedrive-base windows-push-notifications dns-over-tls crowdstrike dns-over-https outlook-web-online ms-store paloalto-updates websocket apple-push-notifications gmail-base yahoo-web-analytics whatsapp-web naver-line hotmail http-proxy adobe-creative-cloud-base telegram-base ocsp pan-db-cloud windows-azure-base github-base apple-update deepl-base slack-base egnyte-base teamviewer-base google-meet facebook-chat concur-base google-docs-base qlikview paloalto-wildfire-cloud successfactors reddit-base bananatag google-analytics as2 cisco-spark-base viber-base jabber google-chat taobao appdynamics icloud-mail cloudinary-base zoom-base imgur-base webdav splashtop-remote zscaler-internet-access google-drive-web ms-onedrive-business liveperson discord salesforce-base tokbox quora-base paloalto-dns-security giphy-base vimeo-base giphy-downloading notion-base webex-base openai-base paloalto-cloud-identity zendesk-base paloalto-logging-service dailymotion paloalto-prisma-sdwan-control paloalto-shared-services cloudflare-warp sharepoint-online facebook-video

Thanks
@anissabnk so do you have ONE of each libelle per event? If so, then how do you define response time - is it the TIME of the event, so BT time is OUT time - IN time, and is there only a SINGLE one of each libelle per flux? Try something like this

index="bloc1rg" AND libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d
| stats max(eval(if(libelle="IN_PREC", _time, null()))) as IN_PREC_TIME
        max(eval(if(libelle="OUT_PREC", _time, null()))) as OUT_PREC_TIME
        max(eval(if(libelle="IN_BT", _time, null()))) as IN_BT_TIME
        max(eval(if(libelle="OUT_BT", _time, null()))) as OUT_BT_TIME
        max(eval(if(libelle="IN_RANG", _time, null()))) as IN_RANG_TIME
        max(eval(if(libelle="OUT_RANG", _time, null()))) as OUT_RANG_TIME
        by id_flux
| eval response=(OUT_PREC_TIME-IN_PREC_TIME) + (OUT_BT_TIME-IN_BT_TIME) + (OUT_RANG_TIME-IN_RANG_TIME)
| fields - *_TIME

so you are collecting all the event times for each of the event types by flux id and then just calculating the response time at the end.
Use a change stanza in the input, e.g.

<input type="text" token="pre_domain">
  ...
  <change>
    <eval token="actual_domain">replace($pre_domain$,"\\[\\.\\]",".")</eval>
  </change>
</input>
Have you tried any of the eval JSON functions (https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/JSONFunctions), such as json_extract or json_array_to_mv?
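For reference, a minimal sketch of how those two functions behave on a made-up JSON string; the payload, field names, and paths below are hypothetical, not taken from the original question:

| makeresults
``` fabricate a small JSON payload to demonstrate the functions ```
| eval raw="{\"users\": [\"alice\", \"bob\", \"carol\"]}"
``` json_extract pulls a value out by path; users{0} is the first array element ```
| eval first_user=json_extract(raw, "users{0}")
``` json_array_to_mv turns a JSON array into a multivalue field ```
| eval users_mv=json_array_to_mv(json_extract(raw, "users"))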
@juakashi - Please check the following things:
1. Ensure you have the latest version of the Add-on.
2. Ensure you put the certificate as described and in the described path.
3. Ensure the certificate file permissions are proper: chmod 600 for public cert files and 400 for private key files.
4. Ensure the environment paths are set properly as described in the video.

I hope this helps!! Kindly upvote if it does!!!
I'm thinking of running a script (.BAT file) with an action in the report schedule. However, when I specify a batch file for the script and run it, the script is repeatedly executed the same number of times as there are search results. I want the script execution within the report schedule to run once, regardless of the number of search results. What settings should I make? (e.g. Advanced edit properties)
Sorry, one last question. Do you suggest we shift to an HF or use an indexer?
Yep, props.conf on the UF is very, very limited. SEDCMD in props.conf only works on an HF or indexer, etc.
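For reference, a minimal sketch of what a SEDCMD stanza in props.conf on the HF or indexer could look like; the sourcetype, class name, and sed expression below are made-up illustrations, not the actual Splunk-TA-windows content:

# props.conf on the heavy forwarder or indexer (SEDCMD is ignored on a plain UF)
[WinEventLog:Security]
# strip a hypothetical boilerplate sentence from the end of each event before indexing
SEDCMD-strip_example_boilerplate = s/This is an example boilerplate sentence\..*$//g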
"did you create this on the HF, right?" Hehe, no - that's why I started asking about the HF. So the UF cannot take these SEDCMD configs, right?
>>> 1. Document mentioned that I have to create props.conf in /opt/splunk/deployment.apps/Splunk-TA-windows/local/ ---> created
You created this on the HF, right?
>>> 2. I just copied all lines with SEDCMD and cleared the #, just hoping the config would work.
After updating props.conf on the HF, did you restart the splunk service on the HF?
Sorry for the trouble. As I named myself...
1. Document mentioned that I have to create props.conf in /opt/splunk/deployment.apps/Splunk-TA-windows/local/ ---> created
2. I just copied all lines with SEDCMD and cleared the #, just hoping the config would work.
All these changes were made yesterday.
SEDCMD is a big topic and your one-line reply is not helping me/us. Maybe you should provide more details and ask precise questions. Upvotes/karma points are appreciated by all. Thanks.