All Topics

Hi there, I have a query like this:

message="A" | timechart count AS AA | appendcols [search message="B" | timechart count AS BB]

I want to plot this on the same chart with two lines, AA and BB. But AA has a length of 20 and BB has a length of 60. I believe this is why I am not seeing any lines. If so, how can I pad AA with an extra 40 zeros so that it is the same length as BB? Many thanks
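A common way to get both series on one chart, avoiding the appendcols length mismatch entirely, is a single search that splits the timechart by an eval'd series name. An untested sketch, assuming message is a searchable field:

```
(message="A" OR message="B")
| eval series=if(message="A", "AA", "BB")
| timechart count by series
```

timechart then fills both series over the same set of time buckets, so no manual zero-padding should be needed.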
I am relatively new to Splunk and I am trying to create a field that contains the field value and its count merged into one field. The pattern looks like this:

31/05/2023 22:40:29 (01) >> Adyen Proxy::Proxy::RaiseValidResponse::Proxy event received -> Result : FAILURE ; Source : PROCESSPAYMENTFAILURE ; Message : Validation failed: Total amount is lower than configured min amount. Error Message = Validation failed: Total amount is lower than configured min amount. Register = 01 Store = tkg0452
host = tkg045201.store.tjxcorp.net
source = C:\ProgramData\GlobalSTORE\AdyenLog_5_31_2023.log
sourcetype = GSTR:Adyen:log

I am trying to create one field (e.g. the error and its count). First I need to count the current field "Error_Message", and then merge the count with the field value.
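One hedged sketch of merging a field value with its count into a single field, assuming Error_Message is already extracted at search time:

```
... | stats count by Error_Message
| eval ErrorAndCount=Error_Message." (".count.")"
| table ErrorAndCount
```

The eval concatenation operator (.) joins the string value and the numeric count into one display field.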
| eval ExitStatus=if(ExitStatus>0, 1, 0) | stats count by ExitStatus by Site

In the search query above, I am looking to count the ExitStatus field after evaluating it to one of two values, 0 or 1, by the Site field, which has 4 values. Currently the search query isn't showing any statistics, and I think it's due to the two BY clauses. Is there an alternative solution?
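For reference, stats accepts multiple split-by fields in a single BY clause, separated by commas rather than repeated `by` keywords (sketch):

```
| eval ExitStatus=if(ExitStatus>0, 1, 0)
| stats count by ExitStatus, Site
```

Alternatively, `chart count over Site by ExitStatus` would give one row per Site with one column per ExitStatus value.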
Hello, I have ingested CSV data into a lookup. The common field is Service_Method in the CSV and dt.entity.service_method in the Splunk query.

index="*dyn*" source="*:*dgf*" dt.entity.service_method=* metricId=builtin:service.keyRequest.count.total | stats avg(value) as "AvgValueCountTotal" count as "Total" by dt.entity.service_method

I am trying to combine these two data sets into one table.
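One possible way to pull the CSV columns into the search results is the lookup command, matching the lookup's Service_Method column against dt.entity.service_method. A sketch, where my_lookup is a placeholder for the actual lookup name:

```
index="*dyn*" source="*:*dgf*" dt.entity.service_method=* metricId=builtin:service.keyRequest.count.total
| stats avg(value) as "AvgValueCountTotal" count as "Total" by dt.entity.service_method
| lookup my_lookup Service_Method as dt.entity.service_method
```

With no OUTPUT clause, lookup appends all remaining columns from the CSV to each matching row.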
I have a problem using the timechart command with this query. If I use "table" it works, but with timechart it doesn't... can anybody help?

|host..., source..., etc | table AREA, SUBID, CURRENT_TIMESTAMP | join AREA, SUBID, CURRENT_TIMESTAMP [|search source... EVENT_TYPE... | table AREA, SUBID, TXT, CURRENT_TIMESTAMP ] | eval message_id=AREA.SUBID." ".TXT | timechart count(message_id) as "No. of message" by message_id
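Note that timechart needs the _time field, which `table` discards unless it is listed explicitly. A hedged sketch that keeps _time through the pipeline:

```
| fields _time AREA SUBID CURRENT_TIMESTAMP
| join AREA, SUBID, CURRENT_TIMESTAMP
    [ search source... EVENT_TYPE... | fields _time AREA SUBID TXT CURRENT_TIMESTAMP ]
| eval message_id=AREA.SUBID." ".TXT
| timechart count(message_id) as "No. of message" by message_id
```

Using `fields` with _time included preserves the timestamp that timechart buckets on, which is the usual reason table works while timechart returns nothing.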
Our application prints logs in JSON format. Example:

{"ts":"05 30 2023 10:30:00.013","th":"logging-metrics-publisher","level":"INFO ","logger":"LoggingMeterRegistry","msg":"jvm.threads.states{state=timed-waiting} value=40 threads" }
{"ts":"05 30 2023 10:30:00.012","th":"logging-metrics-publisher","level":"INFO ","logger":"LoggingMeterRegistry","msg":"jvm.threads.states{state=waiting} value=222 threads" }

Some logs are displayed in JSON format and some are not. The JSON-format logs are generated by the same code, but I am not sure why some logs are shown in the same block like this. We haven't put anything in the props file.
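If the sourcetype should always be parsed as JSON and broken one event per line, one hedged option is a props.conf stanza (the sourcetype name here is a placeholder; KV_MODE applies on the search head, the line-breaking settings on the indexer or heavy forwarder):

```
[your:sourcetype]
KV_MODE = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = \"ts\":\"
TIME_FORMAT = %m %d %Y %H:%M:%S.%3N
```

SHOULD_LINEMERGE = false with a newline LINE_BREAKER is a common way to stop multiple single-line JSON records from being merged into one event block.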
Hi All, I would like to know the best way to simulate attacks within my organisation. I cannot use VirtualBox due to a licensing issue, but I do have VMware. All the tutorials online show how to use the attack range on VirtualBox but not on VMware. Any help is much appreciated, as this is vital to test our detections. Thanks all
I have an active playbook that runs on most event types, but some get auto-closed, and I would like to know if there's a way to end the playbook run if the case gets closed before the playbook run finishes.
Dear community, I have this:

index=*** sourcetype="***********" Path="*******" | stats count by HTTPStatusCode

which will display codes 2**, 4**, 5**. Then I used this code to calculate the failure rate:

index=***** sourcetype="*****" Path="*****" | stats count AS Total count(eval(HTTPStatusCode="200")) as Success | eval Failure = Total - Success | eval Percent_Failure = round((Failure/Total)*100)."%" | stats count by Percent_Failure

As you can see above, it worked fine, showing a 2% fail rate, so I added it to my Classic Dashboard as a single value and all is good. But, to my frustration, when I add it to my new Dashboard Studio dashboard, the single value does not display correctly, yet when I change it to a table type, the logic is correct and it works fine. Why is it refusing to work as a single value in the new Dashboard Studio? Is it a compatibility issue in the new studio? My goal is just to display a nice percentage of failures. I need your help please, I am new to Splunk. Thanks.
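For a Dashboard Studio single value, it may help to end the search with exactly one numeric field rather than a `stats count by` over a string such as "2%". A hedged sketch:

```
index=***** sourcetype="*****" Path="*****"
| stats count as Total count(eval(HTTPStatusCode="200")) as Success
| eval Percent_Failure=round(((Total-Success)/Total)*100)
| fields Percent_Failure
```

The "%" suffix can then be applied through the visualization's number formatting (unit setting) instead of being baked into the string value.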
Hi Team, we want to upgrade the current version of the Splunk Universal Forwarder to a compatible version. The current UF version is 7.3.0 and Splunk Enterprise is 8.1.0. Please suggest the best UF version to upgrade to with respect to Splunk Enterprise.
Hi, I'm looking to improve performance and avoid the subsearch_maxout issue with a join on two source types. I'm joining on a one-to-many field, 'Correlator': srctypeA will have one unique entry for 'Correlator' and srctypeB has multiple entries. So when my search on srctypeA returns a value for Correlator, I want to timechart by a field in the records with a matching Correlator in srctypeB. The current search below works; I'm looking for better-performing options if anyone has any suggestions.

index=index1 sourcetype=srctypeA field1=ABC* | join max=0 type=inner Correlator [ search index=index1 sourcetype=srctypeB ] | timechart span=1d count by field2
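A join-free alternative that often scales better is to search both sourcetypes at once and keep only srctypeB events whose Correlator also appears in a matching srctypeA event. An untested sketch:

```
index=index1 ((sourcetype=srctypeA field1=ABC*) OR sourcetype=srctypeB)
| eventstats count(eval(sourcetype=="srctypeA")) as hasA by Correlator
| where hasA > 0 AND sourcetype="srctypeB"
| timechart span=1d count by field2
```

Because there is no subsearch, the subsearch_maxout limit does not apply; eventstats does the correlation across all events sharing a Correlator value.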
Hi, I have a dashboard that runs off the back of a report that is scheduled to run every minute. I need to amend the report schedule to run every minute between the specified hours AND every 5 minutes outside those hours, to save on resources. My current cron expression looks like this:

*/1 * * * *

To run every minute between the specified hours, I could write something like this:

*/1 3-8,12-20 * * *

The question I have is how to add a clause to schedule the search to run every 5 minutes outside those hours? Many thanks.
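A single cron expression cannot express two different intervals for different hour ranges, so one common workaround is two clones of the report with complementary schedules. A sketch, assuming the busy hours are 3-8 and 12-20:

```
*/1 3-8,12-20 * * *
*/5 0-2,9-11,21-23 * * *
```

The second expression covers exactly the hours the first one does not, so between the two reports the search runs every minute in the busy windows and every 5 minutes otherwise.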
Hello Splunkers, here is my use case: I am cloning some events that arrive at my Heavy Forwarder and then forwarding those cloned events to another Splunk (standalone, free trial) machine. I am able to receive the logs on my target machine with the following configuration:

On my HF (sender / forwarder):

transforms.conf
[srctype-clone]
CLONE_SOURCETYPE = mynewsrctype
REGEX = .*
DEST_KEY = _TCP_ROUTING
FORMAT = tcp_output_conf

outputs.conf
[tcpout:tcp_output_conf]
server = <ip>:<port>
sendCookedData = false

On my Splunk standalone machine (receiver):

inputs.conf
[tcp://15601]
disabled = false
index = whatever_index
sourcetype = mynewsrctype

Based on that, I have some questions:
- First, I am receiving the logs, but also some unwanted logs containing only "HB". Does that correspond to a heartbeat? Why do I receive that?
- Should I use splunktcp instead of tcp on my receiver?
- Should I use enableS2SHeartbeat = true on my receiver?
- Should I use sendCookedData = true on my sender?

Thanks a lot for your help! Regards, GaetanVP
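If the goal is a normal Splunk-to-Splunk feed between two Splunk instances, a hedged alternative is cooked data on both ends (sketch):

```
# outputs.conf on the HF (sender)
[tcpout:tcp_output_conf]
server = <ip>:<port>
sendCookedData = true

# inputs.conf on the receiver
[splunktcp://15601]
disabled = false
```

With sendCookedData = false and a plain [tcp://] input, the receiver treats the stream as raw bytes, which may be why forwarder control traffic such as heartbeat markers shows up as literal "HB" events; a splunktcp input understands the S2S protocol and consumes them.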
I have 2 Splunk queries.

Query1 gives this result:

Account  User  consumed Count
A1       U1    2
A1       U2    1

Query2 gives this result:

Account  User  verified Count
A1       U1    2
A1       U2    2

I want to do a left join on both to produce the following result:

Account  User  consumed Count  verified Count
A1       U1    2               2
A1       U2    1               2

Please help me with how this can be done. I tried a left join but am not getting the correct result. Thanks in advance.
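The join can often be replaced with append plus stats, which merges rows by key. An untested sketch, where <query1> and <query2> stand for the two searches:

```
<query1>
| append [ <query2> ]
| stats values("consumed Count") as "consumed Count" values("verified Count") as "verified Count" by Account, User
```

Each appended row carries only one of the two count fields, and stats collapses the rows sharing the same Account and User into a single row with both counts.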
Hi, may I know why I cannot access the AppDynamics support portal at https://login.appdynamics.com/sso/authenticate/? Thank you. Regards, Nur
In Dashboard Studio, I want to set the value of my token to only the clicked values of a specific column by using row.<fieldname>.value. But in my case, my customers don't like seeing column names like CustomerAddress or CustomerPhone; they prefer "Customer Address" or "Customer Phone". How can I pass these values as tokens in my dashboards, since the column names will contain spaces? I've tried double quotes, single quotes, escaping, and URL encoding.
Hi everyone, I created a CSV lookup with one column named "IP" which contains a public IP list, and now I want to use these values inside my searches. For instance, when searching incoming traffic to our firewall, I want to use the lookup's values in the dest_ip field of my base search. Could you help me, please?
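A lookup can feed a base search as a subsearch; renaming the column to the field you want to filter on lets Splunk expand the subsearch into an OR of dest_ip=... terms. A sketch, where your_fw_index and public_ips.csv are placeholders for the real index and lookup names:

```
index=your_fw_index
    [ | inputlookup public_ips.csv | rename IP as dest_ip | fields dest_ip ]
```

The subsearch returns only the dest_ip column, so the outer search matches any event whose dest_ip is in the lookup.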
Hi All, I am working on a search that looks for field values from a lookup in an index, and I have created the search below:

index="nch_apps_nonprod" [| inputlookup WAS_ErrorCode.csv | table WASEventCode | rename WASEventCode as query]
| append [| inputlookup WAS_ErrorCode.csv | stats values(WASEventCode) as WASEventCode]
| eventstats values(WASEventCode) as WASEventCode
| where isnotnull(_raw)
| eval WASEventCode=mvmap(WASEventCode,if(match(_raw,WASEventCode),WASEventCode,null()))
| eventstats count as Ecount by WASEventCode
| convert ctime(_time) as time
| dedup WASEventCode
| lookup WAS_ErrorCode.csv WASEventCode OUTPUT Severity2 Description2 Threshold2 WASEventCode
| eval Threshold2=mvindex(Threshold2,0)
| eval Severity2=mvindex(Severity2,0)
| eval Description2=mvindex(Description2,0)
| eval WASEventCode=mvindex(WASEventCode,0)
| where Ecount>Threshold2
| eval message="mc_host= "+host+"; mc_object= "+source+"; mc_object_class= "+sourcetype+"; mc_origin= "+host+"; msg='"+WASEventCode+" : "+Description2+" with count as "+Ecount+"'; mc_tool_time= "+time+"; mc_origin_sev= "+Severity2+";"
| table message

In the search above, I am searching for the field values (WASEventCode) from the lookup in the index. Please note that this field is not present in the index; I am doing a text search, and it is working fine. Now I want to add a condition that also searches for another field value from the lookup: it should match on both field values, i.e. the Description2 belonging to the WASEventCode as well, but I have not been able to do this.
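One hedged way to require a match on both lookup fields is to zip them into code|description pairs before the mvmap test. An untested sketch, built on the search in the question, assuming WASEventCode and Description2 are parallel multivalue fields at that point in the pipeline:

```
| eval pairs=mvzip(WASEventCode, Description2, "|")
| eval matched=mvmap(pairs,
    if(match(_raw, mvindex(split(pairs, "|"), 0))
       AND match(_raw, mvindex(split(pairs, "|"), 1)), pairs, null()))
```

matched then keeps only the pairs where both the code and its Description2 occur in _raw; the two parts can be split back out with split and mvindex for the downstream eventstats and threshold logic.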
Hello, I'm currently working with the Splunk API and would love some clarification on a few points related to user provisioning. Through my testing of the Enterprise on-premises API, I've found multiple options for listing user-related entities, such as different namespaces, and methods for listing entities (users, roles, and permissions) in various services (admin and authentication services) as well as different platforms and products. I would appreciate any guidance regarding the user provisioning domain and how user provisioning works across Splunk products, platforms, and services. Here are my questions:
1. Are the REST API's user provisioning entities that I've found on the Enterprise platform also available on the Cloud platform and across the services, with an identical structure?
2. Are user provisioning entities on the Enterprise platform available in a similar scope and on similar endpoints as on the Cloud platform?
3. Are the ACL settings that I can find under different entities in the API tied to specific capabilities? If not, what defines the ACL?
We are now monitoring MS SQL database instances that have numerous schemas; however, our requirement is to monitor the performance of a single schema only. I submitted a request at the help site for that, and I got the response below.

Problem statement: wanted to monitor only one DB instead of the complete MSSQL DB server.

Next steps: by default, the agent will monitor all DBs in the SQL Server instance. An option is available to restrict monitoring of certain databases: exclude the unwanted databases using the "Exclude Databases" option in the collector configuration. Refer to https://docs.appdynamics.com/appd/23.x/latest/en/database-visibility/add-database-collectors/configure-microsoft-sql-server-collectors and look for the "Exclude Databases" option.

However, omitting databases is not a solution for us, because doing so will make it harder to monitor the remaining databases. Please give us your suggestions so we can solve this problem more quickly.