All Topics
Hi there, these results are for a particular serial number; we have many results like this for several serial numbers. Row 1 holds the results from the index, and row 2 holds the results from the lookup file. The objective is to remove the unwanted data: when src_name contains "aap", the record is unwanted. When the "aap" record in row 1 (the index result) is removed, the record with the same src_name should not be considered either; in other words, both records that share the same src_name for a particular serial number should be excluded from the results. Example input:

item     | email_new         | item_model | in_inventory | is_apple | src_name     | src_name_concat | serial_number
5cg01233 | hello@company.com | HP         | 1            | 0        | aap-5cg01233 | aap-5cg01233    | 5cg01233
s102910  | hai@company.com   |            | 1            | 0        | 5cg0233      | 5cg01233        |
5cg1435  | yess@company.com  | Dell       | 1            | 0        | 5cg1435      | 5cg1435         | 5cg1435
s109525  | no@company.com    |            | 1            | 0        | 5cg1435      | 5cg1435         |

Output (since only one pair does not contain "aap", only it is considered):

item     | email_new         | item_model | in_inventory | is_apple | src_name | src_name_concat | serial_number
5cg1435  | yess@company.com  | Dell       | 1            | 0        | 5cg1435  | 5cg1435         | 5cg1435
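One way to sketch this (assuming the combined index/lookup rows are already in the pipeline and the field names match the example above) is to flag every serial_number group that contains an "aap" src_name, then drop the whole group:

```
... base search producing the combined index and lookup rows ...
| eventstats max(eval(if(like(src_name, "%aap%"), 1, 0))) as has_aap by serial_number
| where has_aap=0
| fields - has_aap
```

eventstats attaches the group-level flag to every row in the group, so both records for an affected serial number are excluded together rather than just the one containing "aap".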
I am working on a Splunk Cloud instance (https://prd-p-f34a1.splunkcloud.com/en-US/app/launcher/home) and I want to raise a support case to get access to the REST API port 8089, but while raising the case it shows: "It doesn't look like you have any active cloud stacks. If you believe this is an error, please contact Support via telephone using the region-specific numbers found here."
Hi experts, I was stuck in a quandary trying to see which of my customer base was using optimization mode. I need the percentage of optimization patterns used for each org, sorted by orgId, so I tried the following search:

index=* type=* orgId=* | eval Mode = case(type ==" non_opt", "None-Optimized", type=="opt", "Optimized") | stats count by Mode, orgId | sort count | stats list(Mode), list(count) by orgId

But so far I only get the number of opt/non-opt users sorted by orgId. What I actually want is the value or percentage opt/(opt + non-opt), with the result grouped by orgId. How should I do this?
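A sketch of one way to compute the percentage directly, assuming type only takes the values opt and non_opt:

```
index=* type=* orgId=*
| stats count(eval(type=="opt")) as opt, count as total by orgId
| eval opt_pct = round(100 * opt / total, 2)
| table orgId opt total opt_pct
```

count(eval(...)) counts only the rows matching the condition, so the division gives opt/(opt + non_opt) per orgId without needing a second stats pass.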
Hi, we are upgrading Splunk to version 9.0.2. After upgrading the instance, the tabs functionality is not working. Can anyone help?
Hello Splunk enjoyers! I loaded some data (10,000,000 events), with the fields updated_time, info, user and description, into my new index "data_tmp". When I search, I get this problem: Error in 'IndexScopedSearch': The search failed. More than 1000000 events found at time 1677582000. So I tried to set the time from updated_time like this:

index=data_tmp | eval _time = strftime(updated,"%Y-%m-%d %H:%M:%S.%3N") | convert ctime(_time) | fieldformat _time = strftime(updated,"%Y-%m-%d %H:%M:%S.%3N")

but nothing works. Can somebody help me with that? Thank you!
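A sketch of the usual direction for this (assuming updated_time is a string in the format shown): strptime parses a string into an epoch value, while strftime goes the other way (epoch to string), so overwriting _time wants strptime:

```
index=data_tmp
| eval _time = strptime(updated_time, "%Y-%m-%d %H:%M:%S.%3N")
```

Note this only changes _time at search time. The error about more than 1,000,000 events at a single indexed timestamp points at ingestion, where the timestamp would need to be extracted before indexing, e.g. with props.conf settings such as TIME_PREFIX and TIME_FORMAT on the sourcetype.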
index=mail | lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match | where isnull(domain_match) ```| table subject sender values(recipient) values(RecipientDomain) Count values(size)``` | stats values(recipient) values(subject) count by RecipientDomain sender | sort -count

I have this search running daily. Based on the results of the search, I want to compare the sender field with another CSV file called 123.csv in lookups; there is a field called "Email Address" in this CSV. I want the results where there is a match. Please help. Thank you.
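A sketch of one way to do the match, assuming 123.csv is uploaded as a lookup table file and the column header is exactly "Email Address":

```
index=mail
| lookup email_domain_whitelist domain AS RecipientDomain OUTPUT domain as domain_match
| where isnull(domain_match)
| stats values(recipient) values(subject) count by RecipientDomain sender
| lookup 123.csv "Email Address" AS sender OUTPUT "Email Address" AS matched_sender
| where isnotnull(matched_sender)
| sort -count
```

The second lookup returns the "Email Address" value only when sender appears in 123.csv, so isnotnull(matched_sender) keeps just the matching rows (flip it to isnull to see the non-matches instead).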
I'm using DB Connect to input some data from Oracle. I have Splunk installed on a Windows 2016 server. I cannot seem to get any of my sourcetypes read or used with an input created via DB Connect. No matter what I do, if I run a search from the "Find Events" button of the DB Connect application, then click on "+Extract New Fields", it returns an error: "The events associated with this job have no sourcetype information:". Every. Single. Time. Sample query:

index=thejoy sourcetype=WHYNOT source=JUSTWORKSERIOUSLY OR source=mi_input://JUSTWORKSERIOUSLY

Interestingly enough, if I run the following query I get the EXACT same results:

index=thejoy source=JUSTWORKSERIOUSLY OR source=mi_input://JUSTWORKSERIOUSLY

I get data back from both of these searches, but I am unable to extract new fields. I have tried the following:

* Creating a new index and assigning it to splunk_app_db_connect
* Creating a new index and assigning it to search
* Creating a new sourcetype via Settings > Source Types and setting it to Searching & Reporting
* Creating a new sourcetype via Settings > Source Types and setting it to Splunk DB Connect
* Specifying the application for the data input to be DB Connect
* Specifying the application for the data input to be Splunk Search
* Changing permissions on DB Connect to allow Everyone to read/write
* Creating a new user and doing all of the above
* In DB Connect, typing a new value for sourcetype that doesn't exist so that it gets created automatically

No matter what I try, I get the same message about no sourcetype information being available for the job. If I create a new source type via Settings > Source Types in the main Splunk menu, it doesn't show up in the list for DB Connect (which I understand is a bug). This changes very little, since it's apparently not being used anyway.

If I let DB Connect create a new sourcetype, I do not see it appear in the Settings > Source Types menu after the DB input is created and a successful search is executed. Also, when I check the props.conf file located in C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\local\, my sourcetype is not present at all; it's just an empty file. I'm simply at a loss as to why this is happening and what to do. I just want my DB inputs to recognize my sourcetypes. Ultimately, I want to parse my data as it goes into Splunk using a specific source type. I'm at the point where I'm considering a fresh install of Splunk. Any help on this extremely frustrating issue would be greatly appreciated.
Hello, I'm confused about the indexing queue. If fill_perc >= 99.0, that's a blocked queue, right? And if fill_perc is between 1 and 99, does that mean a slow queue? If the queue is slow, is the impact slow searches for users?
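A sketch of a search to watch the indexing-queue fill percentage yourself; metrics.log in _internal reports the current and maximum queue sizes:

```
index=_internal source=*metrics.log* group=queue name=indexqueue
| eval fill_perc = round(current_size_kb / max_size_kb * 100, 2)
| timechart span=5m perc90(fill_perc) by host
```

A queue pinned at or near 100% is blocked (the stage downstream cannot keep up); sustained mid-range values usually show up as indexing lag, i.e. events becoming searchable late, rather than as slow searches.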
How can I extract the following user and move it to a field in Splunk?

message: xad="/home/andy"
message: xad="/home/george"
message: xad="/home/cindy"

and a lot more. I would like to get output as follows:

user
====
andy
george
cindy

Because of the quote " before /home, Splunk rejected my regex. Please help. Thanks.
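A sketch of a rex that handles the quotes; the double quote inside the SPL string literal needs a backslash escape, which is the usual reason such a regex gets rejected:

```
... | rex field=_raw "xad=\"/home/(?<user>[^\"]+)\""
| table user
```

The character class [^\"]+ captures everything up to the closing quote, so usernames containing dots or dashes are picked up as well.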
Hi, we have the Splunk agent running as a Docker container. We previously configured inputs.conf and props.conf on the Splunk container and were ingesting CSV files; everything worked smoothly. But for certain reasons we can no longer make those changes to inputs.conf and props.conf on the container, so we added the config below in AWS CloudFormation. Our CSV files contain a fixed header as line 1. Unfortunately, Splunk ingests the file only the first time and does not ingest subsequent files that contain the same header. Any help on this would be great. Thanks.

splunk:
  monitors:
    - index: "myindex"
      file: "/path/abc-*.csv"
      sourcetype: "mysourcetype"
      crcSalt: "<SOURCE>"
      DATETIME_CONFIG: "CURRENT"
      INDEXED_EXTRACTIONS: "csv"
      KV_MODE: "none"
      HEADER_FIELD_LINE_NUMBER: "1"
      FIELD_DELIMITER: ","
      SHOULD_LINEMERGE: "false"
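For comparison, a sketch of the inputs.conf equivalent (assuming the CloudFormation keys map onto monitor stanza settings). The monitor input decides whether a file is new by a CRC of its first 256 bytes, so files sharing an identical header can look like duplicates; crcSalt = <SOURCE> only helps when the paths differ, while initCrcLength makes the fingerprint read deeper into the file:

```
[monitor:///path/abc-*.csv]
index = myindex
sourcetype = mysourcetype
crcSalt = <SOURCE>
# read past the shared header when fingerprinting the file
initCrcLength = 1024
```

Whether initCrcLength can be passed through your CloudFormation monitor schema is an assumption to verify; if not, making the early bytes of each file differ achieves the same effect.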
I have data dated "2-14-2022". When I insert the data into Splunk today, _time becomes "3-2-2023". How can I overwrite _time to preserve the value "2-14-2022" even though I insert the data on "3-2-2023", without creating an additional field to store a snapshot of the datetime? The reasons I would like to do this: (1) I can leverage the Splunk time range selector to limit my search query, instead of building the time range selector myself. (2) I observed that the query turnaround time is faster when limiting the data with earliest=1677686400 latest=1677980130 (2.2 seconds) than when using the start_date and end_date custom input fields that I created (194 seconds).
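A sketch of props.conf timestamp settings that would make the indexer take _time from the date inside the event instead of the ingest time. The stanza name, TIME_PREFIX and the exact format are assumptions about your sourcetype and raw data:

```
[my_sourcetype]
# regex leading up to the date string in the raw event (adjust to your data)
TIME_PREFIX = date=
TIME_FORMAT = %m-%d-%Y
# allow timestamps this far in the past to be accepted
MAX_DAYS_AGO = 2000
```

With the timestamp indexed as _time, both the time range picker and earliest/latest constrain the search to the right buckets, which is why they are so much faster than filtering on a custom field after the fact.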
Hi all! I had a look around but couldn't find an answer to this. I'm trying to do a search where I track a user's login journey leading to a specific failed-attempt error. The logging system doubles up on events, so I'm only looking for values that happen at different times, and I want to remove the duplicates that occur at the exact same time. However, my search keeps showing all the events and ignoring the dedup, and I cannot for the life of me figure out why. Example of the search below:

index=INDEX sourcetype=SOURCETYPE <Search Phrase>
| eval LockoutTime=strftime(_time,"%Y-%m-%d %H:%M:%S %Z")
| transaction USERID maxspan=30M mvlist=true endswith=(EventDescription=EVENT)
| table LockoutTime USERID EventDescription Message EventCode Result
| dedup 1 LockoutTime
| where mvcount(EventCode)>1

Any help would be greatly appreciated.
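One thing to check: after transaction with mvlist=true, LockoutTime is a multivalue field per transaction, so dedup is comparing whole transactions rather than the duplicated raw events. A sketch that drops the same-timestamp duplicates before grouping, assuming the duplicate events agree on these fields:

```
index=INDEX sourcetype=SOURCETYPE <Search Phrase>
| dedup _time USERID EventCode
| eval LockoutTime=strftime(_time, "%Y-%m-%d %H:%M:%S %Z")
| transaction USERID maxspan=30m mvlist=true endswith="EventDescription=EVENT"
| table LockoutTime USERID EventDescription Message EventCode Result
| where mvcount(EventCode) > 1
```

Deduplicating on the raw events first means the transaction is built only from distinct occurrences, so the multivalue fields it produces no longer carry the doubled entries.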
For example, the InfoSec app looks like the first picture on my deployer but looks like the second example on my SH. What gives?
Hello to all. I would like to know the default time settings for hot, warm, cold and frozen buckets, and what the retention policy is. When I go to Settings -> Monitoring Console -> Indexing -> Indexes and Volumes -> Index Detail: Instance, I find the retention policy shown there. When I look in $SPLUNK_HOME/etc/system/local/default/web.conf I can see some information about the buckets. Thanks if someone can solve my question.
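For reference, retention is controlled per index in indexes.conf; a sketch with the usual knobs. The values shown are the shipped defaults as I understand them, so treat them as assumptions to verify against your version's spec file:

```
[myindex]
# data older than this is frozen (deleted, unless coldToFrozenDir is set); default ~6 years
frozenTimePeriodInSecs = 188697600
# index is trimmed oldest-first once it exceeds this total size; default 500 GB
maxTotalDataSizeMB = 500000
```

Buckets roll hot -> warm -> cold based on bucket counts and sizes, and data freezes on whichever of the age or size limits is hit first, so retention is effectively the tighter of the two settings.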
Community, looking for some assistance with the serverclass.conf file and the ability to use whitelist regex pattern matching so that we can target specific devices in our network. We want to include only devices with this naming schema: T-<some string>. Separately, we want to match only devices with another naming schema: L-<some string>. We are pushing different configurations to each set of devices (hence the need for separation). What we started with in each case is a whitelist of L-* and T-* respectively. This all worked fine, until we found that we have devices in our environment with the naming schema T-<some string>L-<some string>. We attempted to leverage some regex matching, but believe our syntax to be wrong, as the respective app and its configuration are no longer being deployed to the systems to be managed. We are looking for assistance on how to properly write regex matching that only matches on the first instance of a single letter followed by a dash, as this does not seem to be well documented. Thank you in advance.
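A sketch of what this could look like in serverclass.conf. The whitelist/blacklist patterns there behave as anchored wildcard patterns where * maps to a regex .*, so one way to keep the T-...L-... machines out of a class is an explicit blacklist; the class names and the exact pattern semantics are assumptions to verify against your version's documentation:

```
[serverClass:T_devices]
whitelist.0 = T-*
# exclude hosts using the combined T-...L-... naming schema
blacklist.0 = T-*L-*

[serverClass:L_devices]
whitelist.0 = L-*
```

Because the patterns are anchored, L-* should not match a host named T-fooL-bar on its own; the blacklist handles keeping such hosts out of the T class (or move it to the L class, depending on which configuration those combined hosts should receive).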
Is there a license report called "current month license data with peak and avg"? If so, where is it located? If not, how could I create it?
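I'm not aware of a built-in report with that exact name; the License Usage Report view (Settings -> Licensing, or the Monitoring Console) covers roughly the last 30 days. A sketch of a search that computes daily usage with peak and average for the current month, using license_usage.log, where type=Usage events report bytes in the b field:

```
index=_internal source=*license_usage.log* type=Usage earliest=@mon
| eval GB = b / 1024 / 1024 / 1024
| timechart span=1d sum(GB) as daily_GB
| eventstats max(daily_GB) as peak_GB, avg(daily_GB) as avg_GB
```

Run it on (or against) the license manager, since that is where license_usage.log is written; save it as a report to get the named report you're after.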
Hello Splunkers, I have the following search, which gives me the dashboard as a table. Can we make this a column or bar chart where each bar is an SN and hovering over it shows the duration?

index=abc | stats earliest(_time) as etime latest(_time) as ltime by SN | eval duration=ltime - etime | eval time_duration=tostring(duration, "duration") | fields SN time_duration

Below are sample events:

2023-03-01T11:14:41.094095-08:00 hostabc log-inventory.sh[22269]: GPU7: PCISLOT: xx.yyy, MODEL: Graphics Device, PN: 2vvv1, BOARDPN: vvv, SN: 155552
2022-03-01T11:14:41.094095-08:00 hostabc log-inventory.sh[22269]: GPU7: PCISLOT: xx.yyy, MODEL: Graphics Device, PN: 2vvv1, BOARDPN: vvv, SN: 155552

Thanks in advance.
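A chart needs a numeric y-axis, and tostring(duration, "duration") produces a string, so one sketch is to chart the numeric seconds and keep the formatted form alongside:

```
index=abc
| stats earliest(_time) as etime latest(_time) as ltime by SN
| eval duration_sec = ltime - etime
| eval duration_hms = tostring(duration_sec, "duration")
| chart max(duration_sec) as duration_sec by SN
```

Render the panel as a column chart; each bar is an SN and the hover tooltip shows the charted duration_sec value. If you need the h:m:s form in the tooltip, keeping duration_hms in a table panel next to the chart is a simple workaround.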
I have logs like below:

{
  TransactionName: "my TransactionName"
  type1Error: NA
  eventTime: 2023-02-28 11:16:52.961
  type2Error: NA
  type3Error: NA
}
{
  TransactionName: "my TransactionName"
  type1Error: NA
  eventTime: 2023-02-28 11:16:52.961
  type2Error: Missing Field
  type3Error: NA
}

I have framed the query below:

index=my_idx | stats count by type1Error, type2Error, type3Error

which gives me a result like:

type1Error  type2Error     type3Error     count
NA          NA             NA             1
NA          NA             Missing Field  1

But it would be better if I could report successes and failures separately, i.e. two new queries, one for errors equal to NA and one for errors not equal to NA.

NA:

Success      count
type1Error   2
type2Error   2
type3Error   1

Not NA:

Failure      count
type1Error   0
type2Error   0
type3Error   1

How can we achieve this? I'm not getting a clear picture of how to frame a query for this. I tried using chart, but no luck!
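A sketch of one way to count the NA (success) cases per error field in a single pass and then flip the single stats row into one row per field; the field names are taken from the sample above:

```
index=my_idx
| stats count(eval(type1Error="NA")) as type1Error,
        count(eval(type2Error="NA")) as type2Error,
        count(eval(type3Error="NA")) as type3Error
| transpose column_name=Success
| rename "row 1" as count
```

Swapping each condition to !="NA" (and column_name=Failure) gives the failure table; running both as separate panels reproduces the two views you described.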
First, I am trying to find the Splunk whitepaper on MSaaS, which is currently a dead link on the page describing the MSaaS conceptual architecture. I would also appreciate ideas and architectures for supporting a Splunk platform that is essentially multi-tenant. Thoughts on multiple instances versus creating and separating by index are welcome too. I'm just trying to identify methods and weigh options at this point. Thanks.
I scheduled a PDF delivery of a custom dashboard, but I can't seem to get the chart in the dashboard to fill the width of the page. I fiddled around with the Print button, and that generates a PDF where the charts do fill the width of the page, but automating that might be a bit of a pain. Any ideas? Splunk version: 8.1.6.