Hello. I'm having a problem and I can't for the life of me figure out what is going wrong. I am running a search like this against two lookups (both lookup files have multiple columns):

index=gateway EventIDValue=gateway-check EventStatus=success
| lookup assets_and_users.csv USER AS SourceUserName, ASSET AS EndpointDeviceName OUTPUTNEW USER, ASSET
| lookup computer_objects.csv own_asset AS EndpointDeviceName OUTPUTNEW own_asset
| where isnotnull(USER) OR isnotnull(ASSET) OR isnotnull(own_asset) AND own_asset!=EndpointDeviceName

The idea is to check for a certain number of assets and users previously seen in our environment with the assets_and_users.csv lookup, and to filter out assets that are currently managed by us with the computer_objects.csv lookup, so that I can see activity from the previously seen assets and users as well as assets that have not been seen before and are not managed by us. However, the first iteration of the search looked like this:

index=vpn EventIDValue=gateway-check EventStatus=success
| lookup assets_and_users.csv USER AS SourceUserName OUTPUTNEW USER
| lookup computer_objects.csv own_asset AS EndpointDeviceName OUTPUTNEW own_asset
| where isnotnull(USER) OR isnotnull(own_asset) AND own_asset!=EndpointDeviceName

That version gave me a couple thousand events. However, once I added the asset part as seen in the top query, I got three events, which doesn't make sense. If anything, I should get more events than with the first iteration (bottom query). Can someone spot where it goes wrong?
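Two likely culprits, offered as a hedged guess. First, when a lookup is given multiple input fields, a lookup-table row must match all of them at once, so the two-field version only outputs when the same row matches both SourceUserName and EndpointDeviceName; that alone could explain the drop from thousands of events to three, and splitting it into two lookup calls avoids it. Second, in the where command AND binds more tightly than OR, so own_asset!=EndpointDeviceName only applies to the last branch, where it can never be true (own_asset is only populated by matching EndpointDeviceName). A sketch, assuming the intent is "previously seen user or asset, or an unseen asset that we don't manage":

index=gateway EventIDValue=gateway-check EventStatus=success
| lookup assets_and_users.csv USER AS SourceUserName OUTPUTNEW USER
| lookup assets_and_users.csv ASSET AS EndpointDeviceName OUTPUTNEW ASSET
| lookup computer_objects.csv own_asset AS EndpointDeviceName OUTPUTNEW own_asset
| where isnotnull(USER) OR isnotnull(ASSET) OR (isnull(ASSET) AND isnull(own_asset))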
Hi, I'm getting the blocked-queue and error messages below on the HF. I don't know how to troubleshoot this blocked-queue issue. Can you help with a quick fix?
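A common starting point, sketched as a guess since the actual error text isn't shown: blocked queues on a heavy forwarder usually mean a downstream bottleneck (indexers that can't keep up, network issues, or a slow pipeline stage), and the HF's own metrics log shows which queue fills up first. host=my_hf is a placeholder for the heavy forwarder's host name:

index=_internal source=*metrics.log* host=my_hf group=queue
| timechart span=5m avg(eval(current_size_kb/max_size_kb)) AS fill_ratio by name

The queue furthest downstream that stays near 1.0 is usually closest to the real bottleneck; the queues upstream of it block as a consequence.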
When I run a search query, I see that some fields are present in the Interesting Fields list but not present in the event results. How is that achieved?
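In case an illustration helps: fields can be produced at search time (automatic lookups, calculated fields, field aliases, or an eval in the search itself) rather than extracted from the raw event text, so they appear in the fields sidebar without being visible anywhere in _raw. A minimal sketch:

index=_internal sourcetype=splunkd
| eval day_of_week=strftime(_time, "%A")

day_of_week shows up in the fields list for every result, but no event contains it, because it is computed on the fly.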
Please tell me how to transform count values for output when the aggregation axis differs. The fields of the index "接続情報" are timestamp, user name, and connection protocol (タイムスタンプ、ユーザ名、接続プロトコル). The data looks like the following, with a timestamp attached to each row:

---------+----------
ユーザ名 (user) | 接続プロトコル (protocol)
---------+----------
ユーザA | http
ユーザB | http
ユーザA | ftp
ユーザA | scp
ユーザA | http
ユーザB | http
...
ユーザC | ftp
---------+----------

Because the connection protocol "http" is so frequent, I want to display only half the count for "http" connections. When aggregating by connection protocol in one-month buckets, the following search worked:

index="接続情報" | timechart span=mon eval(if("接続プロトコル"="http",count(eval("接続プロトコル"))/2,count)) by "接続プロトコル"

However, I don't know how to do this when aggregating by user name instead. Is it possible in a single search? Thank you in advance.
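A sketch of one way to do this, assuming the field names shown above: give each event a weight of 0.5 when its protocol is http and 1 otherwise, then sum the weights per user.

index="接続情報"
| eval weight=if('接続プロトコル'="http", 0.5, 1)
| timechart span=mon sum(weight) by ユーザ名

The single quotes around 接続プロトコル make eval treat it as a field name; double quotes there would compare against the literal string "接続プロトコル".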
Hi All, I just got an event that said:

Severity: Error
Type: CONTROLLER_METADATA_LIMIT_REACHED
Time: 02/23/23 23:37:23
Summary: Limit Reached for: THREAD_TASK;Scope:ACCOUNT;Id:2;Limit:1000

Does this event affect our controller performance or data collection, and how can we deal with this issue? I found a similar issue for Business Transactions, but nothing for THREAD_TASK, in this discussion: https://community.appdynamics.com/t5/Controller-SaaS-On-Premises/Getting-CONTROLLER-METADATA-REGISTRATION-LIMIT-REACHED-error/td-p/29747

Rgrds, Ruli
I have a few spreadsheets that are ingested into Splunk daily. What is the best method to refresh the data, so I don't end up with duplicates? I am looking to do something like this:

Today: ingest spreadsheet.csv
Tomorrow: delete the previous data for spreadsheet.csv and then ingest the new data

Thanks, Garry
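One common workaround, sketched under the assumption that each row carries a stable key column (row_id here is hypothetical): keep ingesting daily and deduplicate at search time, keeping only the most recently indexed copy of each row:

index=spreadsheets source="spreadsheet.csv"
| sort 0 -_indextime
| dedup row_id

Alternatively, if the data only needs to be referenced rather than searched as events, loading it as a lookup avoids duplicates entirely, since | outputlookup overwrites the previous file on each load.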
index=* "ORC from FCS completed" namespace="dk1371-b" index=* "ORC from ROUTER completed" namespace="dk1692-b" index=* "ORC from SDS completed." namespace="dk1399-b" Above query working fine , ... See more...
index=* "ORC from FCS completed" namespace="dk1371-b" index=* "ORC from ROUTER completed" namespace="dk1692-b" index=* "ORC from SDS completed." namespace="dk1399-b" Above query working fine , ------------------------------------------------------------------------------------------------------ however when am using below its not providing any data    index=* "ORC from FCS completed" namespace="dk1371-b" AND namespace="dk1399-b" Because ORC from "" is different for namespaces    i have below problem statement 1. I would like to prepare single query where i can use all namespaces like dk1371-b , dk1399-b etc . . . . 2 . In single search i would like have FCS/SDS  "ORC from FCS completed" "ORC from SDS completed"        
Hi, I am trying to figure out how to use join to table the results from 2 searches.

Search 1: sourcetype=AAD_MSGraph_UserData
Fields: AAD_OnPremSID, AAD_Email, AAD_UserType, AAD_LastSignInDateTime, AAD_LastNonInteractiveSignInDateTime, AAD_LastPWChange

Search 2: sourcetype=AD_UserData
Fields: AD_SID, AD_UserPrincipalName, AD_LastLogon

Join on: AAD_OnPremSID = AD_SID
Table results: AAD_OnPremSID, AAD_Email, AAD_UserType, AAD_LastPWChange, AAD_LastSignInDateTime, AAD_LastNonInteractiveSignInDateTime, AD_LastLogon

Thanks! Garry
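A sketch of one way to do this without the join command's subsearch limits, assuming AAD_OnPremSID and AD_SID hold SIDs in the same format: search both sourcetypes at once, coalesce the two SID fields into one, and roll everything up with stats:

sourcetype=AAD_MSGraph_UserData OR sourcetype=AD_UserData
| eval SID=coalesce(AAD_OnPremSID, AD_SID)
| stats values(AAD_Email) AS AAD_Email values(AAD_UserType) AS AAD_UserType values(AAD_LastPWChange) AS AAD_LastPWChange values(AAD_LastSignInDateTime) AS AAD_LastSignInDateTime values(AAD_LastNonInteractiveSignInDateTime) AS AAD_LastNonInteractiveSignInDateTime values(AD_LastLogon) AS AD_LastLogon by SID

Rows that only have AAD_* values (or only AD_LastLogon) are accounts that appeared in just one of the two sources, which a join type=inner would silently drop.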
Hi Splunkers, I'm working on two conditions where I need to use a conditional eval statement, with some filters that have to be applied before each eval. Please help me achieve this.

Condition 1: Filters to be applied before: id is not "N/A" AND risk="Critical" AND risk_factor="critical". After satisfying the above conditions, I have to create a field called score: eval score=IF(insurance="Y", instate="Y", age_requirements="y", 30, 60)

Condition 2: Filters to be applied before: id is not "N/A" AND risk="Critical" AND risk_factor="high". After satisfying the above conditions, add to the newly created field "score": eval score=IF(insurance="Y", instate="Y", age_requirements="y", 60, 90)

TIA.
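A sketch of one reading of these rules, assuming the intent is "the lower score when insurance, instate, and age_requirements are all satisfied, otherwise the higher one" (if() in eval takes exactly one condition, so the three checks need to be joined with AND):

... base search ...
| where id!="N/A" AND risk="Critical"
| eval all_reqs=if(insurance="Y" AND instate="Y" AND age_requirements="y", 1, 0)
| eval score=case(
    risk_factor="critical" AND all_reqs=1, 30,
    risk_factor="critical", 60,
    risk_factor="high" AND all_reqs=1, 60,
    risk_factor="high", 90)

Events matching neither risk_factor get a null score; a catch-all pair like true(), 0 could be appended to case() if a default is wanted.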
Original command:

index=mail sender!="postmaster@groupncs.onmicrosoft.com"
| lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match
| where isnull(domain_match)
| lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2
| where isnotnull(domain_match2)
| stats values(recipient) values(subject) earliest(_time) AS "Earliest" latest(_time) AS "Latest" count by RecipientDomain sender
| sort -count
| convert ctime("Latest")
| convert ctime("Earliest")

Modified command:

index=mail sender!="postmaster@groupncs.onmicrosoft.com"
| lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match
| where isnull(domain_match)
| lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2
| where isnotnull(domain_match2)
| table sender recipient subject DateTime
| sort recipent == 1
| where recipient == 1
| convert ctime(DateTime)

When I use where, there are no results showing. I only want to show results with a single recipient; if there are many, do not show them.
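A hedged guess at the cause, with a sketch: recipient holds email addresses, so recipient == 1 compares a string to a number and never matches. If "a single recipient" means "messages addressed to exactly one recipient", counting the values of the recipient field and filtering on that count may be closer to the goal:

index=mail sender!="postmaster@groupncs.onmicrosoft.com"
| lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match
| where isnull(domain_match)
| lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2
| where isnotnull(domain_match2)
| where mvcount(recipient)=1
| table sender recipient subject DateTime
| convert ctime(DateTime)

mvcount returns the number of values in a multivalue field (1 for a single-valued field), so only single-recipient events survive.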
Hello, I am trying to match the start of a path in httpRequest.uri, as seen here:

index=xyz source=xyz
| spath "httpRequest.headers{}.value"
| search "httpRequest.headers{}.value"="application/json"
| spath "httpRequest.uri"
| regex "^/public*"
| stats count by "httpRequest.uri"
| sort -count

Unfortunately, it isn't working. Can someone point out what I am doing wrong here? If I get rid of the caret, the regex works, but it matches anywhere within the field's string value. I need to match from the beginning of the string. Thank you so much in advance!
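A hunch, sketched: when regex is given no field it runs against _raw, so the caret anchors at the start of the whole event rather than the start of httpRequest.uri. Naming the field should fix the anchoring (and the trailing * only makes the final "c" repeat zero or more times, so it can probably be dropped):

index=xyz source=xyz
| spath "httpRequest.headers{}.value"
| search "httpRequest.headers{}.value"="application/json"
| spath "httpRequest.uri"
| regex "httpRequest.uri"="^/public"
| stats count by "httpRequest.uri"
| sort -count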
In the log there are events like:

{"submitterType":"Others","SubID":"App_4-45887-02232023"}
{"submitterType":"Others","SubID":"App_5-45892-02232023"}

I want a report showing:

App_4-45887-02232023
App_5-45892-02232023

Thanks!
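A sketch, with the index and sourcetype left as placeholders since they aren't given: if the events are valid JSON, spath can pull SubID out directly:

index=your_index sourcetype=your_sourcetype "SubID"
| spath SubID
| table SubID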
Hello, I have the following query that shows all the values from the Splunk events that matched values in the lookup table; however, I would also like to display the values in the lookup table that are not present in the Splunk events:

| metadata type=hosts index=_internal
| rex field=host "(?<host>.+)--.+"
| lookup mylookup Name as host OUTPUT Name "IP Address" as IP Classification "Used for" as used_for
| fillnull value="No match"
| search Classification=Production used_for!=*Citrix* used_for!=*Virtualization*
| stats c by host,Name,IP,Classification,used_for
| fields - c

How can I show both matched and unmatched values?
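One common pattern, sketched on the assumption that mylookup's Name column is the join key: append the entire lookup to the search results and aggregate by host, so lookup rows with no matching host still reach the output:

| metadata type=hosts index=_internal
| rex field=host "(?<host>.+)--.+"
| eval in_events="yes"
| append [| inputlookup mylookup | rename Name as host]
| stats values(in_events) AS in_events values("IP Address") AS IP values(Classification) AS Classification values("Used for") AS used_for by host
| fillnull value="No match" in_events

Rows whose in_events is "No match" are lookup entries that never appeared as hosts in the events.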
I have logs (Azure logs) that have two time fields, StartTime and ExpirationTime. Example:

index=azure sourcetype=my_sourcetype
| table StartTime ExpirationTime role user

I want to take the user and see if the user had a failed login attempt in another index / sourcetype between the two time fields StartTime and ExpirationTime. Any help would be greatly appreciated.
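One approach, sketched with assumed timestamp formats and an assumed failed-login search (index=auth, sourcetype=signin_logs, and action=failure are placeholders): convert both fields to epoch time and use map to run one time-bounded search per row:

index=azure sourcetype=my_sourcetype
| eval start=strptime(StartTime, "%Y-%m-%dT%H:%M:%S"), end=strptime(ExpirationTime, "%Y-%m-%dT%H:%M:%S")
| map maxsearches=100 search="search index=auth sourcetype=signin_logs action=failure user=$user$ earliest=$start$ latest=$end$"

map substitutes each row's field values into the $...$ tokens, and earliest/latest accept epoch seconds. Since map runs a separate search per row, it is only practical for small result sets.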
I have a subsearch, and am trying to use the value of a field I extracted in an inner search to check if that value exists anywhere in _raw for the results of my outer search. Current search:

index=my_index
| append
    [ search index=my_index "RecievedFileID"
    | rex field=_raw "RecievedFileID\s(?<file_id>\w*)"
    | fields file_id ]
| search file_id

I can confirm the regex is working, but I can't figure out how to check _raw for any presence of the value of file_id. The logic I'm looking for on the last line is essentially: | where _raw contains the value of file_id. Any assistance is appreciated.
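A sketch of the usual idiom for this: drop the append and place the subsearch directly in the base search, renaming the extracted field to "search". A subsearch field named search (or query) is returned as bare terms rather than field=value pairs, so the outer search matches them anywhere in _raw:

index=my_index
    [ search index=my_index "RecievedFileID"
    | rex field=_raw "RecievedFileID\s(?<file_id>\w*)"
    | dedup file_id
    | fields file_id
    | rename file_id AS search ]

The subsearch expands to ( (id1) OR (id2) OR ... ), so the outer search keeps any event whose _raw contains one of the extracted IDs.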
Register here and ask questions below this thread for the Office Hours session on Dashboards & Dashboard Studio on Wed, April 19, 2023 at 1pm PT / 4pm ET.   Join our bi-weekly Office Hour series where technical Splunk experts answer questions and provide how-to guidance on a different topic every month! This is your opportunity to ask questions related to your specific challenge or use case. This Office Hours session will cover anything from getting started with Dashboard Studio to advanced visualizations and how to migrate your dashboards from Classic to Dashboard Studio.   Please submit your questions below as comments in advance. You can also head to the #office-hours user Slack channel to ask questions (request access here).    Pre-submitted questions with upvotes will be prioritized. After that, we will go in order of the questions posted below, then will open the floor up to live Q&A with meeting participants. If there’s a quick answer available, we’ll post as a direct reply.   Look forward to connecting!
Join this SPECIAL Office Hours session for Dashboard Studio Challenge participants. Submit your questions below for the session on Wed, April 5, 2023 at 1pm PT / 4pm ET.   Register here to join the zoom session. This is your chance to ask the Dashboard Studio Team questions and get live, hands-on help with your dashboard before your final submission.    The Dashboard Studio Challenge is an opportunity to level up your dashboard skills, showcase your visualizations, and win a $100 gift card to the Splunk Store.    Please submit your questions below as comments in advance. You can also head to the #office-hours user Slack channel to ask questions (request access here).  Pre-submitted questions with upvotes will be prioritized. After that, we will go in order of the questions posted below, then will open the floor up to live Q&A with meeting participants. If there’s a quick answer available, we’ll post as a direct reply.   Look forward to connecting!
Hi guys! I need help with a time problem. My structure is the following: I have many agents installed on Windows machines that collect data, and a heavy forwarder that handles the universal forwarders and forwards the data to an Enterprise instance. My issue is that one of the servers where a universal forwarder is installed has a time different from the other machines and from the heavy forwarder, specifically -1h. So when I use searches and alerts in real time or over a 5/10 minute range, I miss all the events related to that machine. I would like all events to take as _time the system time of the Enterprise instance (index time), or at least the heavy forwarder's system time. I tried to change props.conf, inserting date_config = current at every level, but nothing changed. A custom configuration that adds +1h for that specific host would also be fine, as long as the _time field is aligned with the other machines.

Some assumptions: all the machines are in the same country; the particular machine has different clock settings that can't be changed. The events generated on that machine always contain its own timestamp, but we don't want it as _time.
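In case it helps, a sketch: the setting is spelled DATETIME_CONFIG (not date_config), and it only takes effect on the first instance that parses the data, which in this layout is the heavy forwarder, not the universal forwarder. Assuming the skewed machine reports as host skewed-host (a placeholder), props.conf on the heavy forwarder could look like:

[host::skewed-host]
DATETIME_CONFIG = CURRENT

With DATETIME_CONFIG = CURRENT, Splunk ignores the timestamp inside the event and stamps _time with the parsing instance's own clock, which should bring that host in line with the others.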
I'm looking at a very large set of data that separates transactions by product. I've performed some relatively straightforward stats commands on the aggregate data set, but now I'd like to extract a similar set of data as it relates to unique accounts. For example, I want to look at stats related to products, but across unique accounts rather than accounts as a total, to give insight into how specific accounts behave. For the purposes of this, let y be product and x be accountHash.

In a Splunk query, I could extract the distinct account numbers from the data set by product doing the following:

index=index source=source sourcetype=sourcetype product=productValue
| fields product, accountHash
| lookup productCode AS product OUTPUT productDescription
| stats DC(accountHash) as uniqueAccounts by productDescription

What if I wanted to look at, say, stats count as Volume, avg(transactionValue), etc. across unique accounts? Can I then aggregate the total by productDescription? I know that I could do something like this:

index=index source=source sourcetype=sourcetype product=productValue
| fields product, accountHash
| lookup productCode AS product OUTPUT productDescription
| stats count as Volume, avg(transactionValue) as avgTranValue by accountHash, product

But this would give me a dataset with too many rows to be meaningful. Is it possible to create statistics by unique accountHash values, and then tie those to a product? I don't need to see the individual account values, but I'd like to compare statistics across the aggregate total, which would otherwise likely skew the statistics towards the accounts that are used the most.

Could I do something like:

| stats by accountHash

and then another stats command that gives me product results across distinct accounts? If the question isn't clear, let me know and I will try to rephrase.
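A sketch of the two-stage stats pattern, which may be what's needed here: compute per-account statistics first, then aggregate those per-account results by product, so every account contributes equally regardless of its transaction volume:

index=index source=source sourcetype=sourcetype product=productValue
| fields product, accountHash, transactionValue
| lookup productCode AS product OUTPUT productDescription
| stats count as Volume avg(transactionValue) as avgTranValue by productDescription, accountHash
| stats dc(accountHash) as uniqueAccounts avg(Volume) as avgVolumePerAccount avg(avgTranValue) as avgTranValuePerAccount by productDescription

The first stats collapses events to one row per account per product; the second aggregates across those rows, so the heaviest users no longer dominate the averages.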
Register here and ask questions below this thread for the Community Office Hours session on Getting Data In (GDI) to Splunk Platform on Wed, March 29, 2023 at 1pm PT / 4pm ET.   This is your opportunity to ask technical Splunk experts questions related to your specific GDI challenge or use case, like how to onboard common data sources (AWS, Azure, Windows, *nix, etc.), using forwarders, apps to get data in, Data Manager (Splunk Cloud Platform), ingest actions, archiving your data, and anything else you’d like to learn!   There are two 30-minute sessions in this series. You can choose to attend one or both (each session will cover a different set of questions): Wednesday, March 15th – 1:00 pm PT / 4:00 pm ET Wednesday, March 29th – 1:00 pm PT / 4:00 pm ET   Please submit your questions below as comments in advance. You can also head to the #office-hours user Slack channel to ask questions (request access here). Pre-submitted questions (with upvotes) will be prioritized. After that, we will go in order of the questions posted below, then will open the floor up to live Q&A with meeting participants. If there’s a quick answer available, we’ll post as a direct reply.   Look forward to connecting!