All Topics

Hi All, I just got an event that said:

  Severity: Error
  Type: CONTROLLER_METADATA_LIMIT_REACHED
  Time: 02/23/23 23:37:23
  Summary: Limit Reached for: THREAD_TASK;Scope:ACCOUNT;Id:2;Limit:1000

Does this event affect our controller performance or data collection, and how can we work around this issue? I found a similar issue for Business Transactions, but nothing for THREAD_TASK, in this discussion: https://community.appdynamics.com/t5/Controller-SaaS-On-Premises/Getting-CONTROLLER-METADATA-REGISTRATION-LIMIT-REACHED-error/td-p/29747

Rgrds, Ruli
I have a few spreadsheets that are ingested into Splunk daily. What is the best method to refresh the data so I don't end up with duplicates? I am looking to do something like this:

  Today: ingest spreadsheet.csv
  Tomorrow: delete the previous data for spreadsheet.csv, then ingest the new data

Thanks, Garry
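Since removing indexed events outright needs the can_delete role and the | delete command, a common search-time alternative is to keep only the most recent load of each file. A sketch, assuming the file lands in an index named my_index with source="spreadsheet.csv" and is loaded once per day:

  index=my_index source="spreadsheet.csv"
  | eval load_day=strftime(_indextime, "%Y-%m-%d")
  | eventstats max(load_day) AS latest_day by source
  | where load_day = latest_day

This filters to the latest day's ingest at search time instead of physically deleting the older copies.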
index=* "ORC from FCS completed" namespace="dk1371-b" index=* "ORC from ROUTER completed" namespace="dk1692-b" index=* "ORC from SDS completed." namespace="dk1399-b" Above query working fine , ... See more...
index=* "ORC from FCS completed" namespace="dk1371-b" index=* "ORC from ROUTER completed" namespace="dk1692-b" index=* "ORC from SDS completed." namespace="dk1399-b" Above query working fine , ------------------------------------------------------------------------------------------------------ however when am using below its not providing any data    index=* "ORC from FCS completed" namespace="dk1371-b" AND namespace="dk1399-b" Because ORC from "" is different for namespaces    i have below problem statement 1. I would like to prepare single query where i can use all namespaces like dk1371-b , dk1399-b etc . . . . 2 . In single search i would like have FCS/SDS  "ORC from FCS completed" "ORC from SDS completed"        
Hi, I am trying to figure out how to use join to table the results from 2 searches.

Search 1: sourcetype=AAD_MSGraph_UserData
Fields: AAD_OnPremSID AAD_Email AAD_UserType AAD_LastSignInDateTime AAD_LastNonInteractiveSignInDateTime AAD_LastPWChange

Search 2: sourcetype=AD_UserData
Fields: AD_SID AD_UserPrincipalName AD_LastLogon

Join on: AAD_OnPremSID = AD_SID
Table results: AAD_OnPremSID, AAD_Email, AAD_UserType, AAD_LastPWChange, AAD_LastSignInDateTime, AAD_LastNonInteractiveSignInDateTime, AD_LastLogon

Thanks! Garry
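A stats-based correlation usually scales better than the join command, which is bound by subsearch limits. A sketch, assuming both sourcetypes are searchable together and the SID values match exactly:

  (sourcetype=AAD_MSGraph_UserData) OR (sourcetype=AD_UserData)
  | eval SID=coalesce(AAD_OnPremSID, AD_SID)
  | stats values(AAD_Email) AS AAD_Email
          values(AAD_UserType) AS AAD_UserType
          values(AAD_LastPWChange) AS AAD_LastPWChange
          values(AAD_LastSignInDateTime) AS AAD_LastSignInDateTime
          values(AAD_LastNonInteractiveSignInDateTime) AS AAD_LastNonInteractiveSignInDateTime
          values(AD_LastLogon) AS AD_LastLogon
      by SID

coalesce picks whichever SID field the event has, so rows from both sourcetypes group under the same key.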
Hi Splunkers, I'm working on two conditions where I need a conditional eval statement, with some filters that have to be applied before each eval. Please help me achieve this.

Condition 1: filters to apply first: id is not "N/A" AND risk="Critical" AND risk_factor="critical". Once those conditions are satisfied, I have to create a field called score: 30 if insurance="Y", instate="Y", and age_requirements="Y", otherwise 60.

Condition 2: filters to apply first: id is not "N/A" AND risk="Critical" AND risk_factor="high". Once those conditions are satisfied, add to the newly created field "score": 60 if insurance="Y", instate="Y", and age_requirements="Y", otherwise 90.

TIA.
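A single case() can fold both conditions into one eval. A sketch, assuming the intended logic above (all three flags "Y" gives the lower score, anything else the higher):

  ... | where id!="N/A" AND risk="Critical"
  | eval all_flags=if(insurance="Y" AND instate="Y" AND age_requirements="Y", 1, 0)
  | eval score=case(
        risk_factor="critical" AND all_flags=1, 30,
        risk_factor="critical", 60,
        risk_factor="high" AND all_flags=1, 60,
        risk_factor="high", 90)

case() returns the value paired with the first condition that matches, so the more specific conditions have to come first. Note that eval string comparisons are case-sensitive, so a lowercase "y" in the data would not match "Y".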
Original command:

  index=mail sender!="postmaster@groupncs.onmicrosoft.com"
  | lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match
  | where isnull(domain_match)
  | lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2
  | where isnotnull(domain_match2)
  | stats values(recipient) values(subject) earliest(_time) AS "Earliest" latest(_time) AS "Latest" count by RecipientDomain sender
  | sort -count
  | convert ctime("Latest")
  | convert ctime("Earliest")

Modified command:

  index=mail sender!="postmaster@groupncs.onmicrosoft.com"
  | lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match
  | where isnull(domain_match)
  | lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2
  | where isnotnull(domain_match2)
  | table sender recipient subject DateTime
  | sort recipent == 1
  | where recipient == 1
  | convert ctime(DateTime)

When I use where, no results show. I only want to show results where there is a single recipient; if there are many, do not show them.
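where recipient == 1 compares the recipient string against the number 1, which never matches (and "recipent" in the sort is a typo). Counting recipients first and filtering on the count is one way to get the intended behavior; a sketch on the same base search:

  index=mail sender!="postmaster@groupncs.onmicrosoft.com"
  | lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match
  | where isnull(domain_match)
  | lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2
  | where isnotnull(domain_match2)
  | eventstats dc(recipient) AS recipient_count by sender
  | where recipient_count=1
  | table sender recipient subject DateTime
  | convert ctime(DateTime)

eventstats adds the per-sender distinct recipient count to every event without collapsing rows, so the table still lists the individual messages. If "single recipient" should be judged per message rather than per sender, the by clause would need a message identifier instead.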
Hello, I am trying to match the start of a path in httpRequest.uri, as seen here:

  index=xyz source=xyz
  | spath "httpRequest.headers{}.value"
  | search "httpRequest.headers{}.value"="application/json"
  | spath "httpRequest.uri"
  | regex "^/public*"
  | stats count by "httpRequest.uri"
  | sort -count

Unfortunately, it isn't working. Can someone point out what I am doing wrong here? If I get rid of the caret, the regex works, but it matches anywhere within the field's string value. I need it to match from the beginning of the string. Thank you so much in advance!
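When no field is named, the regex command runs against _raw, so the ^ anchors at the start of the whole event rather than the start of the URI. Naming the field should fix it; a sketch (the trailing * is dropped because it only makes the final "c" repeatable, which is rarely what is meant):

  | regex "httpRequest.uri"="^/public"

The syntax is regex <field>=<pattern>; the field name needs quoting here because it contains dots.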
In the log there are events like:

  {"submitterType":"Others","SubID":"App_4-45887-02232023"}
  {"submitterType":"Others","SubID":"App_5-45892-02232023"}

I want a report showing:

  App_4-45887-02232023
  App_5-45892-02232023

Thanks!
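Since the events are JSON, spath can pull the field out directly. A sketch with placeholder index and sourcetype names:

  index=my_index sourcetype=my_sourcetype
  | spath path=SubID output=SubID
  | table SubID

If the sourcetype already extracts JSON fields automatically (KV_MODE=json), the spath step can be dropped and | table SubID alone will do.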
Hello, I have the following query that shows all the values from the Splunk events that matched the values in the lookup table; however, I would also like to display the values in the lookup table that are not present in the Splunk events:

  | metadata type=hosts index=_internal
  | rex field=host "(?<host>.+)--.+"
  | lookup mylookup Name as host OUTPUT Name "IP Address" as IP Classification "Used for" as used_for
  | fillnull value="No match"
  | search Classification=Production used_for!=*Citrix* used_for!=*Virtualization*
  | stats c by host,Name,IP,Classification,used_for
  | fields - c

How can I show both matched and unmatched values?
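One pattern is to flip the direction: start from the hosts seen in the events, append the lookup's host list, and aggregate, so lookup rows with no event match survive with a zero count. A sketch using the same lookup fields as above:

  | metadata type=hosts index=_internal
  | rex field=host "(?<host>.+)--.+"
  | stats count by host
  | append
      [| inputlookup mylookup
      | rename Name AS host
      | eval count=0]
  | stats sum(count) AS events by host
  | eval status=if(events > 0, "matched", "no match in events")
  | lookup mylookup Name AS host OUTPUT "IP Address" AS IP Classification "Used for" AS used_for

Hosts present only in the lookup contribute just the count=0 row, so they come out with events=0 and status "no match in events"; the final lookup re-attaches the detail columns for filtering.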
I have logs (Azure logs) that have two time fields, StartTime and ExpirationTime. Example:

  index=azure sourcetype=my_sourcetype
  | table StartTime ExpirationTime role user

I want to take the user and see if the user had a failed login attempt in another index/sourcetype between the two time fields StartTime and ExpirationTime. Any help would be greatly appreciated.
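The map command can run one follow-up search per row, substituting each row's values into the time bounds. A sketch in which the authentication index, sourcetype, failure field, and timestamp format are all assumptions to adapt:

  index=azure sourcetype=my_sourcetype
  | eval earliest_epoch=strptime(StartTime, "%Y-%m-%dT%H:%M:%S"),
         latest_epoch=strptime(ExpirationTime, "%Y-%m-%dT%H:%M:%S")
  | map maxsearches=100 search="search index=auth sourcetype=signin_logs user=$user$ action=failure earliest=$earliest_epoch$ latest=$latest_epoch$"

map launches a separate search per input row, so it is best kept to small result sets; for larger ones, searching both sourcetypes in one query and comparing the failure's _time against the start/end epochs with stats tends to scale better.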
I have a subsearch, and am trying to use the value of a field I extracted in an inner search to check if that value exists anywhere in _raw for the results of my outer search. Current search:

  index=my_index
  | append
      [ search index=my_index "RecievedFileID"
      | rex field=_raw "RecievedFileID\s(?<file_id>\w*)"
      | fields file_id ]
  | search file_id

I can confirm the regex is working, but can't figure out how to check _raw for any presence of the value of file_id. The logic I'm looking for on the last line is essentially: where _raw contains the value of file_id. Any assistance is appreciated.
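Instead of append, the subsearch can feed its extracted values back into the outer search as bare terms; return with the $field syntax emits just the values, ORed together, so the outer search matches them anywhere in _raw. A sketch:

  index=my_index
      [ search index=my_index "RecievedFileID"
      | rex field=_raw "RecievedFileID\s(?<file_id>\w+)"
      | dedup file_id
      | return 1000 $file_id ]

Subsearches are capped by default (10,000 results, 60 seconds), so a very large set of IDs may need a lookup-based match instead.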
Hi guys! I need help with a time problem. My setup is the following: I have many agents installed on Windows machines that collect data, and a heavy forwarder that handles the universal forwarders and forwards data to an Enterprise instance. My issue is that one of the servers where a universal forwarder is installed has a clock that differs from the other machines and from the heavy forwarder, specifically by -1h. So when I use searches and alerts in real time, or over 5/10-minute ranges, I miss all the events from that machine. I would like all events to take as _time the system time of the Enterprise instance (index time), or at least the heavy forwarder's system time. I tried changing props.conf, inserting date_config = current at every level, but nothing changed. A custom configuration that adds +1h for that specific host would also be fine, as long as the _time field is aligned with the other machines. Some assumptions: all the machines are in the same country; the particular machine has a different clock setting that can't be changed; the events from that universal forwarder always contain a timestamp from that specific machine, but we don't want it as _time.
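The setting name is DATETIME_CONFIG (conf settings are case-sensitive), and it has to live on the first instance that parses the data, which with universal forwarders is the heavy forwarder; a stanza on the UF or search head has no effect. A sketch, scoped to the problem host (the host name is a placeholder):

  # props.conf on the heavy forwarder
  [host::PROBLEM_HOST]
  DATETIME_CONFIG = CURRENT

DATETIME_CONFIG = CURRENT stamps events with the parsing machine's clock instead of the timestamp found in the event. It needs a restart of the heavy forwarder and only affects events indexed from then on.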
I'm looking at a very large data set that separates transactions by product. I've performed some relatively straightforward stats commands on the aggregate data set, but now I'd like to extract a similar set of statistics as it relates to unique accounts. For example, I want to look at stats related to products, but across unique accounts rather than accounts as a total, to give insight into how specific accounts behave. For the purposes of this, let y be product and x be accountHash. In a Splunk query, I can extract the distinct account numbers from the data set by product doing the following:

  index=index source=source sourcetype=sourcetype product=productValue
  | fields product, accountHash
  | lookup productCode AS product OUTPUT productDescription
  | stats dc(accountHash) as uniqueAccounts by productDescription

What if I wanted to look at, say, stats count as Volume, avg(transactionValue), etc. across unique accounts? Can I then aggregate the total by productDescription? I know that I could do something like this:

  index=index source=source sourcetype=sourcetype product=productValue
  | fields product, accountHash
  | lookup productCode AS product OUTPUT productDescription
  | stats count as Volume, avg(transactionValue) as avgTranValue by accountHash, product

But this would give me a data set with too many rows to be meaningful. Is it possible to create statistics by unique accountHash value and then tie those to a product? I don't need to see the individual account values, but I'd like to compare statistics across the aggregate total, which would otherwise be skewed toward the accounts that transact the most. Could I do something like:

  | stats ... by accountHash

and then another stats command that gives me product results across distinct accounts? If the question isn't clear, let me know and I will try to rephrase.
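Chaining two stats passes does exactly this: the first computes per-account, per-product figures, and the second treats each account as a single observation when rolling up by product. A sketch on the same assumed fields (the lookup to productDescription can be applied before either pass):

  index=index source=source sourcetype=sourcetype product=productValue
  | stats count AS Volume avg(transactionValue) AS avgTranValue by accountHash, product
  | stats dc(accountHash) AS uniqueAccounts
          avg(Volume) AS avgVolumePerAccount
          median(Volume) AS medianVolumePerAccount
          avg(avgTranValue) AS avgTranValuePerAccount
      by product

Because the second pass averages over accounts rather than transactions, heavy users no longer dominate; median() is worth including since per-account volumes are usually skewed.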
I'm using Dashboard Studio and have a geo map. I want to set the colors based on series, which I can do on a line or area chart using seriesColorsByField. The documentation for maps, however, only has dataColors and seriesColors, both of which appear to be ordered, so if a value is not present the colors shift. How can I do something similar to seriesColorsByField on a map?
I have an embedded pie chart where I'm trying to show something rather than "No results found" with the red exclamation mark, which is making people think the report isn't working. I've tried several methods to address this, but I can't get the result I would like. Query:

  | inputlookup my_lookup
  | search Exposure=External
  | stats count by Status
  | eval pie_slice = count + " " + Status
  | fields pie_slice, count

Is it possible to add something to the query so that when there are zero results I get a placeholder slice instead of the error?
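appendpipe can inject a placeholder row only when the pipeline is empty. A sketch (the placeholder text is arbitrary):

  | inputlookup my_lookup
  | search Exposure=External
  | stats count by Status
  | eval pie_slice = count + " " + Status
  | fields pie_slice, count
  | appendpipe
      [ stats count AS rows
      | where rows=0
      | eval pie_slice="0 External", count=1
      | fields pie_slice, count ]

The inner stats count returns the number of rows so far; when that is 0 the placeholder row is emitted, and when real results exist the where clause discards it, leaving the data untouched.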
Hello Members, here at the company we are going to carry out the full migration of Splunk Enterprise, which is currently in AWS Argentina, to AWS North Virginia. I would like to ask for some help: what are the best practices to follow for this type of migration, and with which server do we start transferring data, the heavy forwarder, indexer, search head, or deployment server? Would there be any difference depending on the order? We have a large environment. As a side note, we will take a snapshot of the environment before starting the migration. We are considering using CloudEndure for this job; would it be the best option?
Hello, I have a question because I'm in trouble.

  `EasyVistaGeneric` "Statut" = "En service" AND ("Identifiant réseau"="IMP*" OR "Identifiant réseau"="ECR*" OR "Identifiant réseau"="PCW*")
  | dedup "Identifiant réseau"
  | eval entité=mvindex(split('Entité (complète)',"/"),0)
  | timechart span=1y count by entité useother=f usenull=f

I want to combine the entité values "Commune de Toulon", "METROPOLE TPM", "MTPM", and "Toulon" into a single value that we can name RESULT, so that RESULT = "Commune de Toulon" + "METROPOLE TPM" + "MTPM" + "Toulon". Can you help me please? Thanks
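An eval between the split and the timechart can remap those four values onto one label. A sketch on the same search:

  `EasyVistaGeneric` "Statut" = "En service" AND ("Identifiant réseau"="IMP*" OR "Identifiant réseau"="ECR*" OR "Identifiant réseau"="PCW*")
  | dedup "Identifiant réseau"
  | eval entité=mvindex(split('Entité (complète)',"/"),0)
  | eval entité=if(entité IN ("Commune de Toulon", "METROPOLE TPM", "MTPM", "Toulon"), "RESULT", entité)
  | timechart span=1y count by entité useother=f usenull=f

The IN operator inside eval needs Splunk 7.3 or later; on older versions, a case() expression or a chain of OR comparisons does the same job.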
Hi Splunkers, reaching out for help. This is a sample _raw event:

  12.23.454, abcd, 12.34.45,abc@gmail.com,"[EXTERNAL] 300,000+ software product demos",SEND,OK

I want to split this using the split command, with a comma as the delimiter, and assign the pieces to different fields. However, "[EXTERNAL] 300,000+ software product demos" is a single field, and I don't want it to be split into multiple fields. In a few other events, the comma is not present. For instance:

  12.23.454, abcd, 12.34.45,abc@gmail.com,  "[EXTERNAL] 300000+ software product demos"  ,SEND,OK

How do I ensure that these values are assigned to the field in both kinds of events?

  "[EXTERNAL] 300,000+ software product demos"
  "[EXTERNAL] 300000+ software product demos"

Thanks for your help
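split() has no notion of quoting, so a rex that captures the quoted column as a whole is one way around it. A sketch in which the field names are invented and the layout is assumed to be a fixed seven columns:

  | rex field=_raw "^(?<src_ip>[^,]+),\s*(?<user>[^,]+),\s*(?<dst_ip>[^,]+),\s*(?<email>[^,]+),\s*\"(?<subject>[^\"]*)\"\s*,\s*(?<action>[^,]+),\s*(?<status>[^,]+)$"

The subject is captured as everything between the double quotes, commas included, while the other columns still break on commas; the \s* around the quotes absorbs the stray spaces in the second sample. If some events leave the subject unquoted entirely, the quote marks in the pattern would need to be made optional.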
Hello, I have a question. I am working on this map; however, when there are no results returned, I want to have the empty map instead of the "No results found" message. What can I do?

  <dashboard version="1.1">
    <label>HPE IMC</label>
    <row>
      <panel>
        <title>La liste des alarmes</title>
        <viz type="location_tracker_app.location_tracker">
          <search>
            <query>index="imcfault" sourcetype="st_imcfault" severity=3 OR severity=4 | lookup switchs.csv ip AS sourceIp | rex field=location "^(?&lt;latitude&gt;.+?), (?&lt;longitude&gt;.+?)$" | eval latitude=if(isnull(latitude),"43.123888",latitude) | eval longitude=if(isnull(longitude),"5.953356",longitude) | table _time latitude longitude faultDesc</query>
            <earliest>-15m</earliest>
            <latest>now</latest>
          </search>
          <option name="height">800</option>
          <option name="location_tracker_app.location_tracker.interval">10</option>
          <option name="location_tracker_app.location_tracker.showTraces">0</option>
          <option name="location_tracker_app.location_tracker.staticIcon">none</option>
          <option name="location_tracker_app.location_tracker.tileSet">light_tiles</option>
          <option name="refresh.display">progressbar</option>
        </viz>
      </panel>
    </row>
  </dashboard>
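The same appendpipe trick used for tables can work here: emit a single placeholder row only when the search returns nothing, so the viz always has data to draw. A sketch in plain SPL (the fallback point reuses the default coordinates already in the query, and the placeholder text is arbitrary):

  index="imcfault" sourcetype="st_imcfault" severity=3 OR severity=4
  | lookup switchs.csv ip AS sourceIp
  | rex field=location "^(?<latitude>.+?), (?<longitude>.+?)$"
  | eval latitude=if(isnull(latitude),"43.123888",latitude)
  | eval longitude=if(isnull(longitude),"5.953356",longitude)
  | table _time latitude longitude faultDesc
  | appendpipe
      [ stats count AS rows
      | where rows=0
      | eval _time=now(), latitude="43.123888", longitude="5.953356", faultDesc="Aucune alarme"
      | table _time latitude longitude faultDesc ]

Inside the dashboard XML the < and > in the rex named groups would again need escaping as &lt; and &gt;. How the tracker viz renders the placeholder is viz-specific, so it may need a neutral faultDesc or an off-map coordinate.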
I have a situation where I have a multi-value field that can contain anywhere from 1 to 2000 or more values in a day. Each value is exactly 38 characters long. Each 38-character string is a GUID for another application, and that application can only accept up to 1000 characters at a time. What I'd like to do is chunk the strings together in complete blocks of 20, which would be 760 characters per block, and then call them by mvindex, but I haven't figured out how to do this in eval so that it works whether I have 1 string, 23 strings, or 900 strings to evaluate, since that is always going to be the unknown variable. Any assistance on how to solve this would be very helpful.
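mvrange plus mvmap can build the blocks without knowing the count in advance. A sketch, assuming the multivalue field is called guids (mvmap requires Splunk 8.0+):

  | eval total=mvcount(guids)
  | eval starts=mvrange(0, total, 20)
  | eval blocks=mvmap(starts, mvjoin(mvindex(guids, starts, min(starts + 19, total - 1)), ""))

mvrange(0, total, 20) yields one start index per block (0, 20, 40, ...), and mvmap joins each slice of up to 20 GUIDs into one string, so blocks ends up as a multivalue field with one value per chunk of at most 760 characters, addressable with mvindex(blocks, i); the min() guards the last, possibly shorter, block.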