All Topics

I am running Splunk Enterprise and am trying to create a dashboard panel "Events" search string that pulls multiple Windows event log codes. I am using variations of the search below:

index=windows* sourcetype="WinEventLog:Security" (EventCode>="630" AND EventCode<="640") OR EventCode="641" OR (EventCode>="647" AND EventCode<="668") OR (EventCode>="4726" AND EventCode<="4736") OR EventCode="4737" OR (EventCode>="4743" AND EventCode<="4763") OR EventCode="4764" OR (EventCode>="4782" AND EventCode<="4793")

I also tried this search, to no avail:

sourcetype=wineventlog source="WinEventLog:Security" host="xxxx*" EventCode=4625,4624

When I use the second search without ",4624", it populates events with 4625, but I have not figured out how to make it pull more than one EventCode properly. It doesn't produce any errors or failure text; it simply presents "No results found. Try expanding the time range", even after I widened the range from 15 minutes up to year-to-date. Does anyone have a Windows event search command bank they could share, or can you tell me what to read or explain how to correct my line of code? Thanks!

P.S. host="xxxx*" is a placeholder for my organization's host name.

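A hedged sketch of two likely issues: EventCode=4625,4624 is not valid SPL (the IN operator takes a comma-separated list), and quoting the values in the range comparisons makes them string comparisons, which sort lexicographically rather than numerically (so "70" >= "630" is true as text). Unquoted numbers compare numerically. The index, sourcetype, and host values below are taken from the question:

    index=windows* sourcetype="WinEventLog:Security" host="xxxx*" EventCode IN (4624, 4625)

    index=windows* sourcetype="WinEventLog:Security"
        ((EventCode>=630 AND EventCode<=640) OR (EventCode>=647 AND EventCode<=668))

If these still return nothing, it may be worth confirming with a bare index=windows* sourcetype="WinEventLog:Security" search that matching events exist in the time range at all.
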
Hi, we have one syslog input where we receive log data from two different sources. One runs on local time, i.e. CEST, and carries a distinct string "abc", while the other runs on UTC and carries "def". For some unknown reason the UTC source doesn't carry "UTC" or "+00:00" with it; that information got stripped in transfer, so it is currently off by two hours.

To fix that, I want to pass the "abc" events through unchanged and set "UTC" on the "def" events, so that they are displayed correctly at search time. My experiments with props.conf and transforms.conf (and datetime.xml) were not successful: once the timezone is set at input time, it seems impossible to change it selectively for "def". Transforming the sourcetype is easy, but by then it is too late, and the same applies to the host, so setting the TZ depending on a transformed parameter is not an option.

Any ideas, apart from a conversation with the people who send the broken data?

Thanks in advance
Volkmar

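As the question notes, TZ is applied during timestamp extraction, before any TRANSFORMS-based sourcetype or host rewrite can take effect, so the usual workaround is to separate the two senders before parsing. A hedged sketch, assuming the two sources can be pointed at different listener ports (port numbers and sourcetype names below are invented for illustration):

inputs.conf on the receiver:

    [udp://5140]
    sourcetype = syslog_abc

    [udp://5141]
    sourcetype = syslog_def

props.conf:

    [syslog_def]
    TZ = UTC

If splitting by port is not possible, splitting upstream achieves the same thing (e.g. two pipelines in a syslog relay writing to separate monitored files): each stream gets its own sourcetype or source at input time, which a TZ stanza can then match.
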
I have a line in my splunk_metadata.csv to forward Forcepoint proxy logs to an index called proxy_forcepoint. This worked when running the latest 1.x release. Post upgrade, some of the events still go into that index (these have the sc4s_vendor_product field set to forcepoint), whereas other events are delivered to the lastchanceindex (these do not have an sc4s_vendor_product field).

Looking in app-syslog-forcepoint_webprotect.conf (from the 2.29 source), Forcepoint messages are recognised by "vendor=Forcepoint" (which all messages have), and if Product is "Security" (which all messages have), then the rewrite rule should set product("webprotect").

So I cannot see what is obviously wrong in the configuration or events, or how to investigate the events so I can set the line in splunk_metadata.csv appropriately to get the routing to happen as I wish.

All help appreciated.

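For reference, a hedged example of what a 2.x-style entry might look like, assuming the key is formed from the parsed vendor plus the product("webprotect") set by the filter (the key name is an assumption; SC4S metadata keys changed between 1.x and 2.x, which is a common cause of post-upgrade routing falling through to lastchanceindex):

splunk_metadata.csv:

    forcepoint_webprotect,index,proxy_forcepoint

The events landing in lastchanceindex without sc4s_vendor_product suggest the filter never matched them, so comparing a raw lastchanceindex event against a correctly routed one (field order, the exact vendor= and Product= strings) may show which messages the parser fails to recognise.
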
I am encountering internal server errors when clicking on the "open in search" magnifying glass. These are large queries (approximately 5.5K characters in the request).

Some details:
Splunk version is 8.2.2.1
Single search head with 2 indexers
Splunk is on-prem behind an Apache reverse proxy

I tried setting the LimitRequestLine and LimitRequestFieldSize to larger values in the Apache config with no success. The Apache access_logs report the 500 error but no other really useful information.

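For what it's worth, a hedged sketch of the Apache side, assuming the directives sit in the main server or virtual-host config that actually fronts Splunk Web and that Apache was restarted afterwards (the backend host and port are placeholders):

    LimitRequestLine 65536
    LimitRequestFieldSize 65536

    <VirtualHost *:443>
        ProxyPass        / http://splunk-sh.example.com:8000/
        ProxyPassReverse / http://splunk-sh.example.com:8000/
    </VirtualHost>

Since the access log shows Apache itself returning the 500, checking the Apache error_log rather than access_log, and testing the same long URL directly against port 8000 to bypass the proxy, should confirm which layer rejects the request.
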
Hi, I am trying to recreate in Splunk the same structure that was created in Excel. I have five fields: week, total transactions, code, count of codes, and code percentage. Sample data are shown below; the first two columns (in blue) should be represented row-wise, and the next three columns (in orange) column-wise in the Splunk table.

[sample data screenshot]

I want to display the week and total transactions row-wise, followed by the code columns showing the count and percentage for each code, like below.

[expected output screenshot]

Please let me know if this is possible. Thanks

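A hedged sketch of one way to get that shape, assuming field names week and code and a per-event count (all names are assumptions, since the screenshots didn't come through). The idea is to compute per-code counts and percentages, concatenate them into one display cell, and pivot codes into columns with xyseries, carrying the weekly total inside the row label:

    <base search>
    | stats count as code_count by week code
    | eventstats sum(code_count) as total_transactions by week
    | eval code_percentage=round(code_count*100/total_transactions, 1)
    | eval cell=code_count." (".code_percentage."%)"
    | eval week_label=week." / total: ".total_transactions
    | xyseries week_label code cell

Each row is then a week (with its total), and each code becomes a column whose cells read like "42 (3.5%)".
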
Log lines are as given below:

Reports obtained. MyId=NameOne, sId=s0, Reports=true, LogString=
url=status.com, Type=base, Available=true, Tag=112434, Token=2356
url=status2.com, Type=error, Available=false, Tag=12345, Token=23567
Reports obtained. MyId=NameTwo, sId=s1, Reports=true, LogString=
url=status3.com, Type=base, Available=true, Tag=12345876, Token=2356

I want to create a table as below:

MyId     sId  Reports  url          Type   Available  Tag       Token
NameOne  s0   true     status.com   base   true       112434    2356
NameOne  s0   true     status2.com  error  false      12345     23567
NameTwo  s1   true     status3.com  base   true       12345876  2356

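A hedged sketch, assuming each "Reports obtained." block is indexed as one event (if the url lines arrive as separate events, the grouping step would differ). The header fields are extracted once, every url line is captured into a multivalue field, and mvexpand fans it out into one row per report:

    <base search>
    | rex "MyId=(?<MyId>[^,]+), sId=(?<sId>[^,]+), Reports=(?<Reports>[^,]+)"
    | rex max_match=0 "(?<report>url=[^,]+, Type=[^,]+, Available=[^,]+, Tag=[^,]+, Token=\d+)"
    | mvexpand report
    | rex field=report "url=(?<url>[^,]+), Type=(?<Type>[^,]+), Available=(?<Available>[^,]+), Tag=(?<Tag>[^,]+), Token=(?<Token>\d+)"
    | table MyId sId Reports url Type Available Tag Token
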
We have already migrated the KVstore storage engine to WiredTiger, but we still get a message at login as admin reminding us to complete this migration. How do I disable this permanently?
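
A hedged way to double-check what splunkd actually thinks the engine is before chasing the message (CLI command and server.conf setting as documented; the path assumes a default install):

    $SPLUNK_HOME/bin/splunk show kvstore-status

and in server.conf:

    [kvstore]
    storageEngine = wiredTiger

If the status already reports wiredTiger and the banner persists, it may be a stale bulletin message rather than an incomplete migration; dismissing it under Messages in Splunk Web is worth trying before anything deeper.
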
Team, I have the timechart below, which counts HTTP error/success codes over a 1-hour span. Now I need to calculate the percentage increase (or decrease) in each error/success code relative to the previous hour.

_time             200  4xx errors  5xx errors
2022-05-23 00:00  100  20          30
2022-05-23 01:00  200  30          30
2022-05-23 02:00  250  50          60
2022-05-23 03:00  300  30          50
2022-05-23 04:00  350  40          40
2022-05-23 05:00  400  60          60
2022-05-23 06:00  500  80          80

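A hedged sketch: streamstats can carry each column's previous value forward one row, and an eval then computes the hour-over-hour percentage. Renaming first avoids foreach gymnastics with the space in "4xx errors" (the rename targets are assumptions matching the table above):

    <timechart search>
    | rename "200" as ok_200, "4xx errors" as err_4xx, "5xx errors" as err_5xx
    | streamstats current=f window=1 last(ok_200) as prev_200, last(err_4xx) as prev_4xx, last(err_5xx) as prev_5xx
    | eval pct_200=round((ok_200-prev_200)*100/prev_200, 1),
           pct_4xx=round((err_4xx-prev_4xx)*100/prev_4xx, 1),
           pct_5xx=round((err_5xx-prev_5xx)*100/prev_5xx, 1)

The first row's pct_* fields stay null, since there is no previous hour to compare against.
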
Hello all, I have a question that I have been trying to figure out how to address with a concise SPL query. I have two lookups with a field named X; lookup1 and lookup2 both have this field populated, and I am trying to find a query that outputs the difference in X values between the two lookups. lookup1's X field is multivalue, whereas lookup2's holds single values.

E.g. lookup1 has field values:
Banana
Apple
Oranges

lookup2 has field values:
Banana

Expected output from the desired query:
Apple
Oranges

Is there a way to do this between these two lookups with the above in mind? This seems feasible with one lookup, but comparing two lookups has proven to be difficult. Any support would be appreciated!

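A hedged sketch: expand lookup1's multivalue X into individual rows, then filter out any value that also appears in lookup2 via a subsearch (which works as long as lookup2 stays under the subsearch result limits):

    | inputlookup lookup1
    | mvexpand X
    | search NOT [ | inputlookup lookup2 | fields X ]
    | stats values(X) as X

The subsearch renders as NOT (X="Banana" OR ...), leaving only the values unique to lookup1.
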
Hi, I'm trying to get statistics on user searches, and I'm struggling to work around the 10K limit on distinct values with stats dc(sids) in the query below.

("data.search_props.type"!=other "data.search_props.user"!=splunk-system-user AND "data.search_props.user"!=admin data.search_props.sid::* host=* index=_introspection sourcetype=splunk_resource_usage)
| eval mem_used='data.mem_used', app='data.search_props.app', elapsed='data.elapsed', label='data.search_props.label', intro_type='data.search_props.type', mode='data.search_props.mode', user='data.search_props.user', cpuperc='data.pct_cpu', search_head='data.search_props.search_head', read_mb='data.read_mb', provenance='data.search_props.provenance', label=coalesce(label,provenance), sid='data.search_props.sid'
| rex field=sid "^remote_[^_]+_(?P<sid>.*)"
| eval sid=(("'" . sid) . "'"), search_id_local=replace('data.search_props.sid',"^remote_[^_]+",""), from=null(), username=null(), searchname2=null(), searchname=null()
| rex field=search_id_local "(_rt)?(_?subsearch)*_?(?P<from>[^_]+)((_(?P<base64username>[^_]+))|(__(?P<username>[^_]+)))((__(?P<app>[^_]+)__(?P<searchname2>[^_]+))|(_(?P<base64appname>[^_]+)__(?P<searchname>[^_]+)))"
| rex field=search_id_local "^_?(?P<from>SummaryDirector)"
| fillnull from value="adhoc"
| eval searchname=coalesce(searchname,searchname2), type=case((from == "scheduler"),"scheduled",(from == "SummaryDirector"),"acceleration",isnotnull(searchname),"dashboard",true(),"ad-hoc"), type=case((intro_type == "ad-hoc"),if((type == "dashboard"),"dashboard",intro_type),true(),intro_type)
| fillnull label value="unknown"
| stats max(elapsed) as runtime max(mem_used) as mem_used, sum(cpuperc) AS totalCPU, avg(cpuperc) AS avgCPU, max(read_mb) AS read_mb, values(sid) AS sids by type, mode, app, user, label, host, search_head, data.pid
| eval type=replace(type," ","-"), search_head_cluster="default"
| stats dc(sids) AS search_count, sum(totalCPU) AS total_cpu, sum(mem_used) AS total_mem_used, max(runtime) AS max_runtime, avg(runtime) AS avg_runtime, avg(avgCPU) AS avgcpu_per_indexer, sum(read_mb) AS read_mb, values(app) AS app by type, user
| eval prefix="user_stats.introspection."
| addinfo
| rename info_max_time as _time
| fields - "info_*"

Can someone suggest a tweak in the SPL to get around the 10K distinct limit?

Thank you,
Chris

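A hedged guess at the mechanism: the ceiling is probably not dc() itself but the values(sid) in the first stats, which (if I recall correctly) is truncated at the maxvalues limit in the limits.conf [stats] stanza, default 10000, before the second stats ever counts it. Two sketches, both assumptions about what fits the deployment:

Raise the cap, at the cost of search memory:

    limits.conf:
    [stats]
    maxvalues = 50000

Or avoid accumulating sids entirely by computing the distinct count while there is still one row per sid, e.g. keeping sid in the first group-by and counting in the second:

    ...
    | stats max(elapsed) as runtime ... by type, mode, app, user, label, host, search_head, data.pid, sid
    | stats dc(sid) AS search_count ... by type, user

The "..." stand for the remaining aggregations from the original query; carrying sid in the group-by changes what max/avg aggregate over, so the per-sid semantics would need checking against intent.
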
Getting the error "The lookup table 'Horizon_Feb_2022.csv' requires a .csv or KV store lookup definition." while running the simple query |inputlookup Horizon_Feb_2022.csv. Facing a similar issue with other datasets as well.

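That error usually means the CSV file is known to Splunk but has no lookup definition visible in the current app context, or the file/definition permissions don't reach your role. A hedged sketch of a minimal definition in transforms.conf (the stanza name is arbitrary):

    [horizon_feb_2022]
    filename = Horizon_Feb_2022.csv

after which |inputlookup horizon_feb_2022 should work. The same definition can be created in the UI under Settings > Lookups > Lookup definitions, where sharing permissions can also be widened to the app or globally.
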
Hi,

When trying to configure the inputs via the GUI, I keep getting the following error after adding all the information (pressing Next):

Encountered the following error while trying to save: Splunkd daemon is not responding: ('Error connecting to /servicesNS/admin/TA-zenoss/data/inputs/zenoss_events: The read operation timed out',)

I'm running on a VM and just added more vCores and memory, but I still get the same error. It's the current newest version 2.1.0 of the TA, and Splunk version 8.2.6 on Ubuntu 18.04 LTS.

Any help? Thanks
//T

@epescio_splunk @shaskell_splunk

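One hedged thing to try: the "read operation timed out" comes from Splunk Web waiting on splunkd while the input endpoint validates, and modular input setup handlers sometimes reach out to the remote system and exceed the default timeout. The UI-to-splunkd timeout can be raised in web.conf:

    [settings]
    splunkdConnectionTimeout = 300

This only buys time; if the TA's validation is genuinely hanging (e.g. on network access to the Zenoss server), configuring the stanza directly in inputs.conf and restarting may sidestep the GUI entirely.
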
Hi guys,

Does anyone have the same problem where downstream tiers in a business transaction don't report their errors to the business transaction's error metrics? For instance, a business transaction doesn't error on the first tier, but on a downstream tier connected by JMS it does. My expectation is that an error should then be reported in errors per minute for that business transaction, but it is not. The errors are visible in the 'errors' tab for this business transaction, but without error reporting in the metrics I am unable to alert off these errors.

Interested to see if anyone has come across this problem or has a solution.

Kind regards,
James

Hey. I have a dataset as follows:

[screenshot of first dataset]

I have a second dataset as follows:

[screenshot of second dataset]

I need to perform a lookup between both, but I am getting 3 empty columns after the lookup from the second dataset. Can you please help?

Here is the query I am using:

| inputlookup ABC
| eval AR_ID=_key
| lookup XYZ AR_ID as _key OUTPUT NodeName as SW_NodeName, FQDN as SW_FQDN, Product as SW_App

Thanks,

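A hedged observation: lookup <name> <lookup_field> as <event_field> matches the lookup's field against the named event field, so lookup XYZ AR_ID as _key compares XYZ's AR_ID column against the row's _key. Since AR_ID was just copied from _key, the as clause is redundant at best; and if XYZ is a KV store collection whose key lives in _key rather than in an AR_ID field, nothing will ever match, which would produce exactly the three empty columns. Two sketches, depending on where XYZ actually stores the identifier:

    | inputlookup ABC
    | eval AR_ID=_key
    | lookup XYZ AR_ID OUTPUT NodeName as SW_NodeName, FQDN as SW_FQDN, Product as SW_App

or, if the join key in XYZ is its own _key:

    | inputlookup ABC
    | eval AR_ID=_key
    | lookup XYZ _key as AR_ID OUTPUT NodeName as SW_NodeName, FQDN as SW_FQDN, Product as SW_App

Running | inputlookup XYZ | head 5 and checking which field holds the matching values should settle which form applies.
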
Hi, I have a table like this:

name    color  status
jack    red    fail
jack    blue   fail
daniel  green  pass

Expected output:

name       color     status
jack(2)    red(1)    fail(2)
daniel(1)  blue(1)   pass(1)
           green(1)

Any idea? Thanks,

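Since each output column is an independent list of value(count) pairs rather than correlated rows, one hedged sketch is to aggregate each column separately and stitch the columns together with appendcols (the repeated <base search> stands for whatever produces the original table):

    <base search>
    | stats count by name | eval name=name."(".count.")" | stats list(name) as name
    | appendcols [ search <base search> | stats count by color  | eval color=color."(".count.")"   | stats list(color) as color ]
    | appendcols [ search <base search> | stats count by status | eval status=status."(".count.")" | stats list(status) as status ]

Each appendcols contributes one multivalue column, so the result is a single row whose cells display as the stacked lists in the expected output.
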
Hi, I am working on a way to find an orphaned asset based on an asset inventory I have in a lookup, which looks something like this:

assetname, owner, os
asset-01-abc, bob, win10
asset-03-abc, bob, win10

In a different search I was able to find out that an asset "asset-02-abc" exists (an orphan) which is not present in the lookup. So far I have managed to transform the orphan name to "asset-*-abc", and I would like to create a search that maps that asset to the lookup and outputs something like this:

siblingassets, orphanedasset, owner
asset-01-abc asset-03-abc, asset-02-abc, bob

I tried the following:

[...search which created the orphaned asset list...]
| table orphaned_asset_originalname orphaned_asset_wildcarded
| map [| inputlookup assets | search assetname="$orphaned_asset_wildcarded$" | table assetname, "$orphaned_asset_full_name$" owner]

But it doesn't output correctly, and I also tried using lookup but it simply didn't output anything.

Thank you

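A hedged alternative that avoids map entirely: derive the same wildcarded key from every inventory row, collapse the inventory to one row per key, and join on it. The replace() regex assumes the variable part is always the numeric middle segment, as in the examples:

    [...search which created the orphaned asset list...]
    | rename orphaned_asset_wildcarded as asset_key
    | join type=left asset_key
        [ | inputlookup assets
          | eval asset_key=replace(assetname, "(?<=-)\d+(?=-)", "*")
          | stats values(assetname) as siblingassets, values(owner) as owner by asset_key ]
    | table siblingassets orphaned_asset_originalname owner

For "asset-01-abc" the replace yields "asset-*-abc", matching the key built for the orphan, so the join brings back the siblings and their owner in one pass.
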
We are using TA-MS_O365_Reporting, and as part of a change on Microsoft's side the TA should now connect with a token and not with a user account. I don't see such an option in this TA. What is the alternative?

I am getting the same error after logging in to my SaaS trial account. Is the fix the same? Can anybody at AppDynamics check?

HTTP Status 400 - Error while processing SAML Authentication Response - see server log for details

type: Status report
message: Error while processing SAML Authentication Response - see server log for details
description: The request sent by the client was syntactically incorrect.

Hi Splunkers,

I need to make a statistical table showing each host and each sourcetype it generates, the count for each sourcetype, a column that calculates the total count, and, most importantly, a column with a sample event from each sourcetype. I want it to be something like the attached table:

[table screenshot]

Can someone please help me with a search that produces such a table? I have tried to build one using the following search (without the raw log):

| tstats values(sourcetype) count where index=* by host | sort - count

but that search counts only the total across all the sourcetypes. Then I tried a different search:

index=* | chart count OVER host BY sourcetype useother=false limit=0

but again this is not exactly what I want.

Many thanks,
Murad Ghazzawi

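A hedged sketch that gets all four pieces in one pass; it searches raw events rather than tstats because a sample event requires _raw (index=* can be expensive, so narrowing the index list and time range is advisable):

    index=*
    | stats count as sourcetype_count, first(_raw) as sample_event by host, sourcetype
    | eventstats sum(sourcetype_count) as total_count by host
    | sort host, - sourcetype_count

first(_raw) simply grabs one event per host/sourcetype pair as the sample; latest(_raw) would take the most recent instead.
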
Hi, I am trying to find a way to replace every run of digits in a string with a single asterisk, using a rex field, for example:

AA-1234-12-A
BB-1-132-B-1
56-CC-1-345

to be replaced with:

AA-*-*-A
BB-*-*-B-*
*-CC-*-*

I tried multiple sed commands from the internet, but they either don't work properly in Splunk or don't solve my exact issue.

Many thanks

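A hedged sketch, assuming the values live in a field called myfield (the field name is a placeholder): rex in sed mode with a global substitution collapses each digit run into one asterisk:

    | rex field=myfield mode=sed "s/\d+/*/g"

Applied to the examples, AA-1234-12-A becomes AA-*-*-A, BB-1-132-B-1 becomes BB-*-*-B-*, and 56-CC-1-345 becomes *-CC-*-*. The same transform works at eval time with | eval myfield=replace(myfield, "\d+", "*").
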