All Topics

Hi, I'm trying to get stats on user searches. I'm struggling to work around the 10K distinct-value limit on stats dc(sids) in the query below.

("data.search_props.type"!=other "data.search_props.user"!=splunk-system-user AND "data.search_props.user"!=admin data.search_props.sid::* host=* index=_introspection sourcetype=splunk_resource_usage)
| eval mem_used='data.mem_used', app='data.search_props.app', elapsed='data.elapsed', label='data.search_props.label', intro_type='data.search_props.type', mode='data.search_props.mode', user='data.search_props.user', cpuperc='data.pct_cpu', search_head='data.search_props.search_head', read_mb='data.read_mb', provenance='data.search_props.provenance', label=coalesce(label,provenance), sid='data.search_props.sid'
| rex field=sid "^remote_[^_]+_(?P<sid>.*)"
| eval sid=(("'" . sid) . "'"), search_id_local=replace('data.search_props.sid',"^remote_[^_]+",""), from=null(), username=null(), searchname2=null(), searchname=null()
| rex field=search_id_local "(_rt)?(_?subsearch)*_?(?P<from>[^_]+)((_(?P<base64username>[^_]+))|(__(?P<username>[^_]+)))((__(?P<app>[^_]+)__(?P<searchname2>[^_]+))|(_(?P<base64appname>[^_]+)__(?P<searchname>[^_]+)))"
| rex field=search_id_local "^_?(?P<from>SummaryDirector)"
| fillnull from value="adhoc"
| eval searchname=coalesce(searchname,searchname2), type=case((from == "scheduler"),"scheduled",(from == "SummaryDirector"),"acceleration",isnotnull(searchname),"dashboard",true(),"ad-hoc"), type=case((intro_type == "ad-hoc"),if((type == "dashboard"),"dashboard",intro_type),true(),intro_type)
| fillnull label value="unknown"
| stats max(elapsed) as runtime, max(mem_used) as mem_used, sum(cpuperc) AS totalCPU, avg(cpuperc) AS avgCPU, max(read_mb) AS read_mb, values(sid) AS sids by type, mode, app, user, label, host, search_head, data.pid
| eval type=replace(type," ","-"), search_head_cluster="default"
| stats dc(sids) AS search_count, sum(totalCPU) AS total_cpu, sum(mem_used) AS total_mem_used, max(runtime) AS max_runtime, avg(runtime) AS avg_runtime, avg(avgCPU) AS avgcpu_per_indexer, sum(read_mb) AS read_mb, values(app) AS app by type, user
| eval prefix="user_stats.introspection."
| addinfo
| rename info_max_time as _time
| fields - "info_*"

Can someone suggest a tweak to the SPL to get around the 10K distinct limit? Thank you, Chris
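One common workaround for the 10K distinct-value ceiling is to avoid collecting the SIDs with values() at all and count them in stages instead, or to use estdc (approximate distinct count), which is not subject to the same exact-count limit. A minimal sketch of the two-stage approach, keeping the field names from the query above:

```spl
... everything up to the first stats unchanged ...
| stats max(elapsed) AS runtime, sum(cpuperc) AS totalCPU BY type, user, sid
| stats count AS search_count, sum(totalCPU) AS total_cpu, max(runtime) AS max_runtime, avg(runtime) AS avg_runtime BY type, user
```

Because sid is a BY field in the first stats, each row of its output is one distinct SID, so the second stats only needs a plain count. Alternatively, replacing dc(sids) with estdc(sids) trades exactness for an approximate count.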
Getting the error "The lookup table 'Horizon_Feb_2022.csv' requires a .csv or KV store lookup definition." while running the simple query "| inputlookup Horizon_Feb_2022.csv". I'm facing a similar issue for other datasets as well.
Hi, when trying to configure the inputs via the GUI, I keep getting the following error after adding all the information (pressing Next): "Encountered the following error while trying to save: Splunkd daemon is not responding: ('Error connecting to /servicesNS/admin/TA-zenoss/data/inputs/zenoss_events: The read operation timed out',)". I'm running on a VM and just added more vCores and memory, but I still get the same error. It's the current newest version 2.1.0 of the TA and Splunk version 8.2.6 on Ubuntu 18.04 LTS. Any help? Thanks //T @epescio_splunk @shaskell_splunk
Hi guys, does anyone have the same problem where downstream tiers in a business transaction don't report their errors to that business transaction's error metrics? For instance, a business transaction doesn't error on the first tier, but it does on a downstream tier connected by JMS. My expectation is that an error should be reported in errors per minute for that business transaction. It is not. The errors are visible in the 'errors' tab for this business transaction, however, but without error reporting in the metrics I am unable to alert off these errors. I'm interested to hear if anyone has come across this problem or has a solution. Kind regards, James
Hey. I have a dataset as follows: I have a 2nd dataset as follows: I need to perform a lookup between the two, but I am getting 3 empty columns after the lookup from the 2nd dataset. Can you please help? Here is the query I am using: | inputlookup ABC | eval AR_ID=_key | lookup XYZ AR_ID as _key OUTPUT NodeName as SW_NodeName, FQDN as SW_FQDN, Product as SW_App Thanks,
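In SPL, the lookup syntax is `| lookup <table> <lookup_field> AS <event_field>`, so `lookup XYZ AR_ID as _key` matches the lookup's AR_ID field against the event's _key field. If XYZ's key field is actually named _key and the event field is AR_ID (as the eval above suggests), the direction may need flipping. A hedged sketch, assuming those field names:

```spl
| inputlookup ABC
| eval AR_ID=_key
| lookup XYZ _key AS AR_ID OUTPUT NodeName AS SW_NodeName, FQDN AS SW_FQDN, Product AS SW_App
```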
Hi, I have a table like this:

name    color   status
jack    red     fail
jack    blue    fail
daniel  green   pass

Expected output:

name       color      status
jack(2)    red(1)     fail(2)
daniel(1)  blue(1)    pass(1)
           green(1)

Any idea? Thanks,
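One way to get per-value counts appended in parentheses is an eventstats per column followed by string concatenation in eval. A sketch, assuming the table has exactly the fields name, color, and status:

```spl
| eventstats count AS name_count BY name
| eventstats count AS color_count BY color
| eventstats count AS status_count BY status
| eval name=name."(".name_count.")", color=color."(".color_count.")", status=status."(".status_count.")"
| stats values(color) AS color, values(status) AS status BY name
```

The final stats collapses the rows to one per name, with the annotated color and status values listed alongside.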
Hi, I am working on a way to find an orphaned asset based on an asset inventory I have in a lookup, which looks something like this:

assetname, owner, os
asset-01-abc, bob, win10
asset-03-abc, bob, win10

I was able to find out in a different search that the asset "asset-02-abc" exists - an orphan - which is not present in the lookup. So far I have managed to transform the orphan name to "asset-*-abc", and I would like to create a search that maps that asset to the lookup and outputs something like this:

siblingassets, orphanedasset, owner
asset-01-abc asset-03-abc, asset-02-abc, bob

I tried the following:

[...search which created the orphaned asset list...]
| table orphaned_asset_originalname orphaned_asset_wildcarded
| map [| inputlookup assets | search assetname="$orphaned_asset_wildcarded$" | table assetname, "$orphaned_asset_full_name$" owner]

But it doesn't output correctly. I also tried using lookup, but it simply didn't output anything. Thank you
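A map-free alternative is to derive the same wildcard pattern on both sides and join on it. A rough sketch, assuming the varying part of the asset name is always the numeric segment:

```spl
[...search which created the orphaned asset list...]
| eval pattern=replace(orphaned_asset_originalname, "\d+", "*")
| join type=left pattern
    [| inputlookup assets
     | eval pattern=replace(assetname, "\d+", "*")
     | stats values(assetname) AS siblingassets, values(owner) AS owner BY pattern]
| table siblingassets, orphaned_asset_originalname, owner
```

The subsearch pre-aggregates siblings per pattern, so each orphan row picks up all matching inventory assets at once.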
We are using TA-MS_O365_Reporting, and as part of a Microsoft change the TA should connect with a token and not with a user. I don't see such an option in this TA. What is the alternative?
I am getting the same error after logging in to my SaaS trial account. Is the fix the same? Can anybody at AppDynamics check?

HTTP Status 400 - Error while processing SAML Authentication Response - see server log for details
type: Status report
message: Error while processing SAML Authentication Response - see server log for details
description: The request sent by the client was syntactically incorrect.
Hi Splunkers, I need to make a statistics table that shows the hosts, each sourcetype they generate, the count for each sourcetype, a column that calculates the total count and, most importantly, a column with a sample event from each sourcetype. I want it to be something like the attached table. Can someone please help me with a search that produces such a table?

I have tried to build it using the following command (without the raw log):

| tstats values(sourcetype) count where index=* by host | sort - count

but that search counts only the total across all the sourcetypes. Then I tried a different search:

index=* | chart count OVER host BY sourcetype useother=false limit=0

but again this is not exactly what I want.

Many thanks, Murad Ghazzawi
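A single stats can usually produce all four columns at once: a count and a sample event per host/sourcetype pair, then eventstats for the per-host total. A sketch; latest(_raw) is just one way to pick a sample event:

```spl
index=*
| stats count AS sourcetype_count, latest(_raw) AS sample_event BY host, sourcetype
| eventstats sum(sourcetype_count) AS total_count BY host
| sort - total_count
```

Note this searches raw events rather than using tstats, so it will be slower over long time ranges, but tstats alone cannot return the raw event text.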
Hi, I am trying to find a way to replace the numeric segments in strings with an asterisk, whether they stand alone or are concatenated with other characters, using rex. For example:

AA-1234-12-A
BB-1-132-B-1
56-CC-1-345

should be replaced with:

AA-*-*-A
BB-*-*-B-*
*-CC-*-*

I tried multiple sed commands from the internet, but they either don't work properly in Splunk or don't solve my exact issue. Many thanks
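rex in sed mode can do this in one pass: a global substitution that replaces every run of digits with a single asterisk. A sketch, where `myfield` is a placeholder for your actual field name:

```spl
| rex mode=sed field=myfield "s/\d+/*/g"
```

With this, AA-1234-12-A becomes AA-*-*-A and 56-CC-1-345 becomes *-CC-*-*.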
Dears, is there a way to send the dashboard results as a CSV file rather than a PDF? Regards
I have 2 partitions on my CentOS box. The first one is 20 GB and mounted on /, and the second one is 300 GB and mounted on /opt. The Splunk service is in the /opt/splunk directory and I have 300 GB free on that partition, but I get a disk space warning on the / partition. Please help!
Hello everyone, when I run a health check in the Splunk GUI it shows an error: "One or more source types has been found to present events in the future." All the sources give the correct timestamp in timezone UTC+0:00, but when I checked the devices configured with the affected source types, they are in another timezone, UTC+8:00, and we are receiving logs with timestamps in the future. How can I overcome this problem with future timestamps? The Splunk indexer timezone is UTC+0:00. Please refer to the screenshot. Thanks in advance.
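If the devices cannot be fixed at the source, the usual remedy is to declare the data's real timezone in props.conf on the indexer (or heavy forwarder) that parses the data, so the UTC+8 timestamps are interpreted correctly at index time. A sketch, where the sourcetype name and zone are placeholders for your environment:

```ini
# props.conf on the parsing tier (indexer or heavy forwarder)
[your_sourcetype]
TZ = Asia/Singapore
```

TZ only affects data indexed after the change and a restart; already-indexed events keep their future timestamps.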
How does Splunk calculate Time to Triage, and what data does it use? E.g. the time an event occurred and the time the event was modified or put in pending, etc.?
Hi all, I'm trying to find credit card details in the logs with a single regex expression, but I was also getting other data, like timestamps (which have more than 12 digits) and other random numbers. I'm a bit exhausted with this. Is there a possible solution to find credit card numbers directly without also matching random numbers or timestamps? Help me with the query if possible. Thanks in advance.
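Anchoring the pattern so it cannot sit inside a longer digit run filters out most timestamps. A sketch using negative lookarounds for a 16-digit card with optional space or dash separators (the field name _raw and group name cc_number are illustrative):

```spl
| rex field=_raw "(?<cc_number>(?<!\d)(?:\d{4}[ -]?){3}\d{4}(?!\d))"
| where isnotnull(cc_number)
```

This still cannot guarantee a valid card number; a Luhn checksum would need additional eval logic on top of the extraction.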
I have stores and I want to check the status of each store - whether it is up or down - with the help of its processes.

Processes.csv lookup:

processes  Services  DeviceType
ax         Amazonx   controller
by         buy       register

I wrote a query, but it is not showing the status up or down:

|mstats latest_time(value) as _time where (host="*" OR host="t*") index=a_store_metrics AND metric_name="process.time" by host process
|search process IN ("ax","by")
|eval host=lower(host)
|rex field=host "(?<Device>[^\.]+)"
|rex field=Device "(?<store>\w{7})"
|search [|inputlookup store_device where store="a01" |fields Device |format]
|lookup store_device Device OUTPUT Store as storetype DeviceType
|where (DeviceType="Controller" OR DeviceType="Register") AND store="a01"
|lookup process.csv process OUTPUT Services
|stats latest(_time) as time by instance store
|eval status=if(time!="","UP","DOWN")
|fields store instance service status

I am getting this output:

store  instance  service   status
a01    ax        amazon x  UP
a01    by        buy       UP

If I turn the store off, it does not show DOWN. Suppose I stop the services for "by": it should show status DOWN in the "by" row, but instead the entire row disappears. Please help me out. Thank you
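A row only exists in the stats output when at least one metric data point arrived, so a stopped process simply disappears rather than showing DOWN. One common pattern is to append the full expected list from the lookup with no timestamp, then decide UP/DOWN from whether a real timestamp was seen. A rough sketch, reusing the field names from the post (the lookup and column names are assumptions):

```spl
... existing search down to the stats ...
| stats latest(_time) AS time BY store, instance
| append
    [| inputlookup process.csv
     | rename process AS instance
     | eval store="a01"
     | fields store, instance]
| stats max(time) AS time BY store, instance
| eval status=if(isnotnull(time), "UP", "DOWN")
```

The second stats merges the live rows with the appended expected rows, so instances with no data survive with a null time and come out as DOWN.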
Please suggest the best way to ingest Splunk search results into InfluxDB. A step-by-step guide would be appreciated.
Hi all, I need to create an alert to check that a folder has 10 files that are created daily. The tricky bit is that the folder name is based on the date. The complete path is \\TABASIPP\Prod_Data\appr\data\<today's date>, and in there we need to check that 10 files are created by 4pm. The 10 files are:

20220530Report1.csv
20220530Report2.csv
Fails_30May2022_checked.csv
20220530_total_submissions.csv
20220530_loss_report.csv
30May2022_EOD.csv
etc...

Note that the file names all contain the current date, and the folder has the date in it as well. We need to check that the files are created by 4pm and, if not, send an email alert. Is this possible with Splunk, and how would you do it? Thanks for any help in advance.
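Assuming the daily folders are already being monitored into an index (e.g. via a [monitor://...] input with a wildcarded path), a scheduled search run at 4pm can count how many of today's files have appeared and alert when fewer than 10 exist. A sketch; the index name is a placeholder:

```spl
index=file_tracking source="\\\\TABASIPP\\Prod_Data\\appr\\data\\*"
| eval today=strftime(now(), "%Y%m%d")
| where like(source, "%".today."%")
| stats dc(source) AS files_today
| where files_today < 10
```

Schedule this daily at 16:00 and configure the alert to trigger when results are returned. Note two date formats appear in the filenames (20220530 and 30May2022), so the filter may need a second pattern for the 30May2022-style names.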
The table's "previous" button works except when going from Page 2 to Page 1 of the results. When I click "previous", the results stay the same. The table's source is as follows:

source=<<source>> | stats count by strategy_name | sort -num(count) | table strategy_name, count | rename strategy_name as "Alert Type", count as Count