All Topics

Hello everyone,

We found that the VMs can be allocated more RAM: currently each Search Head has 12 GB, but that can be increased to 48 GB. From my reading, increasing the capacity of the Search Heads can affect the indexer nodes: an increase in search tier capacity corresponds to increased search load on the indexing tier, requiring scaling of the indexer nodes. Scaling either tier can be done vertically, by increasing per-instance hardware resources, or horizontally, by increasing the total node count. That makes sense; the environment currently has more search heads than indexers, and I think increasing the SH capacity could overwhelm the indexers. Current environment: 3 SHs (clustered) and 2 indexers (clustered). I would appreciate any recommendation on how to do this as well as possible so we can make use of the allocated memory.

Kind regards.
Splunkers, I want to get Microsoft-Windows-PowerShell/Operational logs into Splunk. There is no default stanza for it in default/inputs.conf. I think this is the answer:

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = 0
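A minimal inputs.conf sketch along those lines, assuming the stanza is deployed in a custom app on the Windows host and that a destination index named wineventlog exists (the index name is hypothetical; omit that line to use the default index):

# inputs.conf (custom app, local/ directory)
[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = 0
# hypothetical index name; adjust or remove to suit your environment
index = wineventlog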
Hello Splunkers,

I have an issue with my event time configuration: the events get an incorrect timestamp. Below are my props settings; they don't seem to be working. Please advise.

TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = ^
TZ = UTC-4
MAX_TIMESTAMP_LOOKAHEAD = 20

Sample log format (Time is what Splunk assigns, Event is the raw data):

Time: 6/27/21 8:30:56.000 PM
Event:
#Software: banana Internet Information Services 19.0
#Version: 10.0
#Date: 2021-06-27 20:32:46
#Fields: Sacramento is the capital of California

Time: 6/27/21 8:30:56.000 PM
Event:
#Software: pineapple Internet Information Services 39.0
#Version: 12.0
#Date: 2021-06-27 20:32:46
#Fields: Austin is the capital of Texas
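One possible direction, shown only as a sketch: the TIME_FORMAT above matches the "#Date:" line rather than the start of the event, so the prefix could be anchored there instead. The sourcetype name iis_sample and the timezone value are assumptions to adjust for your data:

# props.conf (sketch, hypothetical sourcetype)
[iis_sample]
TIME_PREFIX = \#Date:\s
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
# assumption: source is 4 hours behind UTC; set TZ to the correct zoneinfo name for your source
TZ = US/Eastern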
Hello, when I plot a chart between two columns, I cannot see all the values on the x-axis; only the name of the column is visible. I have around 180 values in the x-axis column. How do I solve this problem?
Hello everyone,

I have been reading about how Splunk can audit changes to the configuration files, and I found this as a possibility: https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/WhatSplunklogsaboutitself

But even though the documentation says it is enabled by default, my Splunk instance is not logging anything into that log. Do you know what I should be doing to track it?

Current version: 8.2.2, clustered environment, Linux.

Thank you in advance.
I am trying to create a dynamic input using the dynamic search option. Can this search use a token in it? So far I've tried and haven't had any success.
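For reference, a minimal Simple XML sketch of a dynamic input whose populating search uses another token; the index name and the $st_tok$ token (set by some other input on the dashboard) are assumptions:

<input type="dropdown" token="host_tok" searchWhenChanged="true">
  <label>Host</label>
  <search>
    <!-- $st_tok$ is assumed to be set by another input on the same dashboard -->
    <query>index=main sourcetype=$st_tok$ | stats count by host</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
</input>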
I am trying to monitor license warnings but I am not sure when the Splunk License Day ends and rolls over to the next day. Are all Splunk Licenses set to UTC by default? Please advise. Thank you
I have rows in the form: ID, Field1, Field2, Field3. I would like to create a histogram that shows the values of all three fields. I can make one for Field1 by doing stats count by Field1 span=1000, but I can't figure out how to get the other values into the same table. Do I need to do multiple searches and join them? How would I go about doing that?
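One possible direction, as a sketch only (the base search, numeric fields, and the span of 1000 are assumptions): flip the three columns into a single name/value pair with untable, then bucket the values once and chart by field name.

index=main sourcetype=my_data
| table ID Field1 Field2 Field3
| untable ID field_name value
| bin value span=1000
| chart count over value by field_name

This produces one row per value bucket and one column per field, so all three histograms land in the same table without joins.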
Hi, I installed the latest versions of both the SpyCloud add-on and app in Splunk Cloud. When I tried the setup, it does not seem to be working. Are this app and add-on supported in Splunk Cloud? Thanks
I have been tasked with implementing a system to monitor our application and alert whenever a page load takes longer than a specified threshold. I need to be able to determine what is causing the slow performance (application vs. database vs. infrastructure) and create a support ticket routed to the appropriate team. It looks like there are two potential options for this on the Splunk products page: Splunk Observability Cloud or Splunk APM. Is anyone able to advise me on which of these products would be a better fit for what I'm looking to do?
Hello, team! I need your help with my search. I have a search that collects a list of IP addresses, and then I need to check whether there is an event in another index with each IP address. If there is a corresponding event, it's okay; if not, alert. What is the best way to implement this?
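A rough sketch of one approach, with hypothetical index and field names (index_a, index_b, src_ip); it keeps only the addresses that have no matching event in the second index, so any result becomes the alert condition:

index=index_a
| stats count by src_ip
| join type=left src_ip
    [ search index=index_b
      | stats count as matched by src_ip ]
| where isnull(matched)
| table src_ip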
Hi Team,

I have created two panels. My first panel's details are:

<query>
  <![CDATA[index=abc ns=blazegateway app_name=blazecrsgateway* "serviceResponseStatus" $Ope$ $caller$ $status$
  | rex field=_raw "operation:(?P<Operation>.*), serviceResponseStatus"
  | rex field=_raw "caller:(?P<Caller>.*) ="
  | rex field=_raw "serviceResponseTime\(ms\)=(?P<Response_Time>.*)"
  | eventstats count by Caller
  | rename Caller as "GRS Caller"
  | lookup ApplicationRef.csv GRSCaller as "GRS Caller" OUTPUT DisplayName
  | rename "GRS Caller" as "GRSCaller"
  | eval CallerName=if(isnull(DisplayName),GRSCaller,DisplayName)
  | table CallerName Operation Response_Time serviceResponseStatus date
  | rename CallerName as "GRS Caller"
  | rename date as "Date"
  | rename serviceResponseStatus as "Response_Status"
  | sort - Date]]>
</query>
<drilldown>
  <set token="show_panel1">true</set>
  <set token="selected_value">$click.value$</set>
</drilldown>

From this I am getting details like:

GRS Caller   Operation   ResponseTime   Status    Date
OneForce     ls          286 ms         Success   2022-06-27
OneForce     dmrupload   381 ms         Failure   2022-06-27

When I click on the first row, I want a detailed description of that row to appear. Can someone guide me on what query I can use for the second panel? Currently I have made this, but it is not working:

<row>
  <panel depends="$show_panel1$">
    <table>
      <title>Caller Details1</title>
      <search>
        <query>abc ns=blazegateway app_name=blazecrsgateway* "serviceResponseStatus" $Ope$ $caller$ $status$ $selected_value$</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <option name="count">100</option>
    </table>
  </panel>
</row>
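As a sketch only, one way to capture more context from the clicked row in Simple XML is to set several tokens in the drilldown; $click.value$ is the first-column value of the clicked row and $click.value2$ is the value of the cell that was clicked. The token names below are made up, and whether those values exist as literal strings in the raw events (given the lookup rename in the first panel) is an assumption to verify:

<drilldown>
  <set token="show_panel1">true</set>
  <!-- value from the first column (GRS Caller) of the clicked row -->
  <set token="sel_caller">$click.value$</set>
  <!-- value of the specific cell that was clicked -->
  <set token="sel_cell">$click.value2$</set>
</drilldown>

The second panel's query could then reference both tokens, for example "$sel_caller$" "$sel_cell$", instead of a single $selected_value$.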
Filtering logs before indexing using transforms.conf and props.conf creates an ingestion latency problem.
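For reference, this is the usual shape of the index-time filtering being described, as a sketch with a hypothetical sourcetype and filter regex; events matching the regex are routed to the nullQueue and discarded before indexing.

# props.conf (on the indexer or heavy forwarder)
[my_sourcetype]
TRANSFORMS-filter_debug = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue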
Hi, I want to plot a table in the dashboard to indicate the scanner status of GitLab repos, like below (1 means enabled, 0 means not used):

fullPath  SAST  SAST_IAC  DAST  DEPENDENCY_SCANNING  CONTAINER_SCANNING  SECRET_DETECTION  COVERAGE_FUZZING  API_FUZZING  CLUSTER_IMAGE_SCANNING
repos1    1     0         0     1                    0                   1                 0                 0            0
repos2    1     1         1     1                    1                   1                 0                 0            0
repos3    1     0         0     1                    1                   1                 1                 1            0

And my raw data stream looks like this:

{"fullPath": "repos1", "securityScanners": {"available": ["SAST", "SAST_IAC", "DAST", "DEPENDENCY_SCANNING", "CONTAINER_SCANNING", "SECRET_DETECTION", "COVERAGE_FUZZING", "API_FUZZING", "CLUSTER_IMAGE_SCANNING"], "enabled": ["SAST", "DEPENDENCY_SCANNING", "SECRET_DETECTION"], "pipelineRun": ["SAST", "DEPENDENCY_SCANNING", "SECRET_DETECTION"]}}
{"fullPath": "repos2", "securityScanners": {"available": ["SAST", "SAST_IAC", "DAST", "DEPENDENCY_SCANNING", "CONTAINER_SCANNING", "SECRET_DETECTION", "COVERAGE_FUZZING", "API_FUZZING", "CLUSTER_IMAGE_SCANNING"], "enabled": ["SAST", "DEPENDENCY_SCANNING", "SECRET_DETECTION"], "pipelineRun": ["SAST", "DEPENDENCY_SCANNING", "SECRET_DETECTION", "DAST"]}}

Could anyone help me build a search to achieve the above?
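A sketch of one possible search, with a hypothetical index and sourcetype; it expands the enabled list into one row per scanner, marks each as 1, and pivots by repo, filling the rest with 0:

index=gitlab sourcetype=gitlab:security
| spath output=fullPath path=fullPath
| spath output=enabled path=securityScanners.enabled{}
| mvexpand enabled
| eval flag=1
| chart max(flag) over fullPath by enabled
| fillnull value=0

One caveat of this sketch: a scanner that is not enabled for any repo will not appear as a column; if all nine columns must always be present, the available list would also have to be brought into the chart.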
Hi Team,

While exploring the Splunk documentation and a few scenarios, I noticed that there is a REST approach to extract saved searches. But I would like to extract the unsaved (ad-hoc) searches performed, to understand patterns and load: unsaved searches run against a given index (or all indexes), along with the query used.

I found the threads below, which can be used to fetch saved searches:
https://community.splunk.com/t5/Splunk-Search/How-can-I-get-a-list-of-all-saved-searches-from-all-apps-using/m-p/162615
https://community.splunk.com/t5/Splunk-Search/Listing-all-saved-searches-from-all-apps-via-REST-without/m-p/508688

Is there any REST-based query that can be used to find the ad-hoc searches performed on Splunk, in order to understand load patterns?
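One direction that may help, shown as a sketch: rather than the saved-searches REST endpoint, the internal audit index records completed searches including ad-hoc ones, so an SPL query over it (time range and field list are up to you) can surface the query text and load:

index=_audit action=search info=completed
| where isnull(savedsearch_name) OR savedsearch_name=""
| table _time user search total_run_time
| sort - _time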
Hello Splunkers,

I'm currently working on a new use case and need some help. I'm working on a HF receiving Microsoft Cloud logs (with https://docs.splunk.com/Documentation/AddOns/released/MSCloudServices), and I would like to forward those logs to two different TCP outputs (Splunk indexers): one with some fields anonymized, and the other without any index-time transformation. Here is a schema to help you understand my problem (schema image not included here).

My thoughts: I currently have an inputs.conf configured on my HF to receive the logs from MS Cloud (with sourcetype set to mscs:azure:eventhub; I think it's compulsory to keep this sourcetype). Then I created props.conf and transforms.conf, but should I put two TRANSFORMS-<class> entries in order to have two different transforms depending on the destination?

My props.conf:

[mscs:azure:eventhub]
TRANSFORMS-anonymize = user-anonymizer

My transforms.conf:

[user-anonymizer]
REGEX = ^(.*?)"\[{\\"UserName\\":[^,]*(.*)
FORMAT = $1"###"$2
DEST_KEY = _raw

Thanks a lot,
Gaétan
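As a sketch only: one pattern for this is to clone each event to a second sourcetype, anonymize only the clone, and route each copy to its own output group. The clone sourcetype and the output group names (mscs:azure:eventhub:anon, group_raw, group_anon) and indexer hostnames below are hypothetical, and the exact ordering of transform classes should be verified against your version's props.conf documentation.

# props.conf
[mscs:azure:eventhub]
# clone every event to a second sourcetype, then route the originals unchanged
TRANSFORMS-00clone = clone_for_anon
TRANSFORMS-10route = route_original

[mscs:azure:eventhub:anon]
# the clone is anonymized and routed to the other output group
TRANSFORMS-00anon = user-anonymizer
TRANSFORMS-10route = route_clone

# transforms.conf
[clone_for_anon]
REGEX = .
CLONE_SOURCETYPE = mscs:azure:eventhub:anon

[route_original]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = group_raw

[route_clone]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = group_anon

# outputs.conf
[tcpout:group_raw]
server = indexer-raw.example.com:9997

[tcpout:group_anon]
server = indexer-anon.example.com:9997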
Hi all, I am using Splunk Stream to receive RADIUS and wondered if anybody has successfully modified the config files to decode the AVP (97) "Framed-IPv6-Prefix"?
Hi, I have 4 fields that need to be in a tabular format. One of the fields holds ratings, which needs to be converted from column values into per-rating count columns, while the other 3 columns stay the same. I have tried transpose and xyseries, but could not achieve it with either.

Current table format:

Name     Domain  Area              Rating
Nsp -1   IT      End user service  H
NSP-2    IT      Mainframe         M
NTS-10   G&A     ENT               L
NTL -05  EPP     Distributed       M
WMC-04   AES     corp              L

Expected table format:

Name     Domain  Area              Rating(H) count  Rating(M) count  Rating(L) count
Nsp -1   IT      End user service  1                0                0
NSP-2    IT      Mainframe         0                1                0
NTS-10   G&A     ENT               0                0                1
NTL -05  EPP     Distributed       0                1                0
WMC-04   AES     corp              0                0                0

Please let me know how to achieve this using a Splunk search.
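One sketch that produces that shape directly (the base search is hypothetical and the field names are assumed to match the table above):

index=my_index sourcetype=my_ratings
| stats count(eval(Rating="H")) as "Rating(H) count"
        count(eval(Rating="M")) as "Rating(M) count"
        count(eval(Rating="L")) as "Rating(L) count"
        by Name Domain Area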
Hi, I want to get back a token ID from the curl command below:

curl -k -u UserName:Password -X POST https://0.0.0.0:8089/services/authorization/tokens

But I am getting back:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Unauthorized</msg>
  </messages>
</response>

Can anyone help with how to resolve this? What is missing from the curl command, or what is wrong?
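For comparison, a sketch of the token-creation call with the request parameters it normally expects; this assumes token authentication has been enabled on the instance and that the account is allowed to create tokens, and UserName, the audience string, and the host are placeholders:

curl -k -u UserName:Password -X POST \
     "https://0.0.0.0:8089/services/authorization/tokens?output_mode=json" \
     -d name=UserName \
     -d audience=my_scripts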
Hello, I have 2 problems. I have an alert that sends emails whenever FILE_NAME=FILE_ERROR, and when that happens, I have to merge the result with a list of internal users (USER_TYPE=Internal). The tables look like this:

Main table:

_time   FILE_NAME
(time)  FILE_ERROR

Table that I want to merge with:

USER_TYPE  USER_EMAIL           USER_PHONE
Internal   internal1@gmail.com  1234
Internal   internal2@gmail.com  5678

I want the result to look like this:

_time   FILE_NAME   USER_TYPE  USER_EMAIL           USER_PHONE
(time)  FILE_ERROR  Internal   internal1@gmail.com  1234
(time)  FILE_ERROR  Internal   internal2@gmail.com  5678

So basically I want to fill all rows of FILE_NAME and (time) from the main table against the other table, so I can use "Alert for each result" and send emails using $result.USER_EMAILS$. I have tried append, appendcols, and join, but only one row ends up with values.

Alternatively, I want the result table to look like this:

_time   FILE_NAME   USER_TYPE  USER_EMAIL                                USER_PHONE
(time)  FILE_ERROR  Internal   internal1@gmail.com, internal2@gmail.com  1234, 5678

Does anyone have a solution for this?
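A sketch of the first variant using a dummy key so that every error row is paired with every internal user; the index names and the search that supplies the user list are hypothetical (the subsearch could just as well be an inputlookup):

index=app_logs FILE_NAME="FILE_ERROR"
| table _time FILE_NAME
| eval joiner=1
| join type=inner max=0 joiner
    [ search index=users USER_TYPE="Internal"
      | eval joiner=1
      | table joiner USER_TYPE USER_EMAIL USER_PHONE ]
| fields - joiner
| table _time FILE_NAME USER_TYPE USER_EMAIL USER_PHONE

Setting max=0 on the join keeps every matching subsearch row instead of only the first, which is why each error row ends up repeated once per internal user.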