All Topics

I am running the below query:

sourcetype="email" | rename SenderAddress as indicator | lookup tci indicator OUTPUT type, rating | where isnotnull(type) | dedup indicator | table indicator FromIP rating type

It all works fine, except if I add a field named attrib.val or tag.name, like below:

sourcetype="email" | rename SenderAddress as indicator | lookup tci indicator OUTPUT type, rating, tag.name | where isnotnull(type) | dedup indicator | table indicator FromIP rating type tag.name

It throws an error:

Error in 'lookup' command: Could not find all of the specified destination fields in the lookup table.

But I actually do have a field named tag.name in the tci lookup. I suspect it is because of the "." in the field names. Kindly suggest.
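A workaround worth trying, assuming the lookup column really is named tag.name, is to alias the dotted field to an underscore-separated name directly in the OUTPUT clause, so no downstream command has to handle the dot. This is a sketch, not a verified fix:

```
sourcetype="email"
| rename SenderAddress as indicator
| lookup tci indicator OUTPUT type, rating, "tag.name" as tag_name
| where isnotnull(type)
| dedup indicator
| table indicator FromIP rating type tag_name
```

If the quoted form is not accepted on your version, the same idea can be applied right after the lookup with | rename "tag.name" as tag_name.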
Hello,

I'm currently working on a dashboard for our servers' VM count logs. Our logs are collected on a daily basis, and I'm trying to show the count trend using trellis, split by data center.

The search is like below:

host=[HOST] index=[INDEX] sourcetype=[SRC_TYPE] source=[SRC] | timechart limit=0 span=1d sum(vm.count) as VM by center

If I make a single value trellis viz with the above search, I find the difference in VM count is only shown on a daily basis, like the pic below.

I want the trendInterval option value to change dynamically when I click the time picker to change the time range. Like, if I change the time range to Last 90 days, then showing me the
Hi, I have Totals (Table A):

Name    Total
a       1000
b       1600
c       2500

Table B:

From    To      Stage
0       1000    Excellent
1000    1500    Good
1500    2000    Poor

I need to join Total with From & To in order to get Stage.

Thanks.
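One way to do this kind of range join in SPL, assuming the two tables are stored as lookup files with the hypothetical names tableA and tableB, is to cross-join them on a dummy key and keep only the rows where Total falls inside the [From, To) band; a sketch:

```
| inputlookup tableA
| eval j=1
| join j max=0
    [ | inputlookup tableB
      | eval j=1 ]
| where Total >= From AND Total < To
| table Name Total Stage
```

join max=0 removes the one-row-per-key limit, so every Table A row is paired with every Table B band before the where filter picks the matching one. For only a handful of fixed bands, a simple eval Stage=case(...) would also work.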
Hello, I would like to set up a monitor for my AWS RDS MySQL. I followed the guide to set up a VM in my AWS platform to run the agent. I confirmed the agent can reach both the AppDynamics Controller and the AWS RDS MySQL instance. However, in the last step it shows a failure to connect the database to the AppDynamics Controller.

Any idea? Is it required to open the AWS RDS MySQL instance to the public? Currently the RDS MySQL runs within a private VPC and can be connected to only from the VM that is running the agent.
Trying to understand how lookups are handled during an app upgrade. If I upgrade an app, will an existing lookup be overwritten by a modified version of the lookup? This pertains to a single instance only, not a SHC.
One of the search queries provides a TimerName and an ID as fields. Another search provides the TYPE of the ID as a field. I need to display the number of times the TimerName gets called for the different TYPEs of these IDs. How do I create a search which extracts the TimerName and ID from the first search and the TYPE from the second search (using the ID from the first search), and displays the count of the TimerName called for each TYPE?
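A common pattern for this, sketched here with hypothetical index and sourcetype names, is to count calls per TimerName/ID pair in the first search, enrich each ID with its TYPE from the second search, then re-aggregate by TimerName and TYPE:

```
index=first_index sourcetype=first_st
| stats count as calls by TimerName, ID
| join type=left ID
    [ search index=second_index sourcetype=second_st
      | stats latest(TYPE) as TYPE by ID ]
| stats sum(calls) as calls by TimerName, TYPE
```

Note that join is subject to subsearch row limits; for large ID sets, writing the ID-to-TYPE mapping to a lookup (outputlookup) and applying it with | lookup usually scales better.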
Hi, we want to be able to tag our host assets to help filter on prod and non-prod environments. We can't use dest because the IPs are constantly changing, but the hosts the logs come from are constant, and it is the environment values we actually want to tag. Is there a way we can alter or configure the asset lookup so it tags the asset hosts as well?
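If altering the asset lookup itself proves awkward, one alternative (sketched with a hypothetical lookup file host_env.csv that maps host to environment) is to maintain your own host-to-environment lookup and apply it at search time, giving every event an environment field you can filter on:

```
index=* sourcetype=*
| lookup host_env.csv host OUTPUT environment
| search environment="prod"
```

To avoid repeating the lookup in every search, the same mapping can be made automatic with a LOOKUP- stanza in props.conf for the relevant sourcetypes.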
Is it possible to use token-based authentication for the Splunk APIs and the Splunk SDK instead of a username and password?
I'm creating demand and supply curves which use streamstats to accumulate demand and supply in order to intersect the curves (and thereby visually display the market price). streamstats is used on the "Volume" field, since supply is presented in bands (e.g. progressively adding 1,000 units sold @ $2, then 2,200 units sold @ $3, etc.), so I need to progressively add these Volume bands/steps in order to make a positive-gradient curve.

| search participant_name="*" code="*" offer_type="Supply" | where Price >= 0 AND Price < 7 | streamstats sum(Volume) as Cumm1_GJ | eval Supply_GJ = round(Cumm1_GJ,0)

Demand, on the other hand, is again presented in bands, but the curve has a negative gradient. Therefore I used the reverse command to reverse the direction of streamstats, i.e. starting at 40,000 Volume units demanded @ $0, then 38,000 Volume units demanded @ $1, etc.

| appendcols [ search participant_name="*" code="*" offer_type="Demand" | streamstats sum(Volume) as Cumm2_GJ | reverse | eval Demand_GJ = round(Cumm2_GJ, 0)] | table Price, Supply_GJ, Demand_GJ

This is where I'm stuck. I do not get the two curves I expected to see in Visualization.
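appendcols pairs rows purely by position, so if the Supply and Demand result sets have different row counts or Price values, the columns will not line up. One way to avoid that entirely, sketched with the same field names, is to pivot both offer types over Price in a single search and run the two cumulative sums in opposite directions:

```
search participant_name="*" code="*" (offer_type="Supply" OR offer_type="Demand")
| where Price >= 0 AND Price < 7
| chart sum(Volume) over Price by offer_type
| sort 0 Price
| streamstats sum(Supply) as Supply_GJ
| reverse
| streamstats sum(Demand) as Demand_GJ
| reverse
| table Price, Supply_GJ, Demand_GJ
```

The chart command produces one row per Price with Supply and Demand columns, so both cumulative curves share the same x-axis by construction; the reverse/streamstats/reverse sandwich accumulates demand from the highest price downward.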
Currently running Tenable Add-On for Splunk v4.0.1. It initially worked and allowed me to enter an account (within the Configuration tab) and an input (within the Inputs tab). Now the Inputs >> Account tab does not populate; there is a perpetual spinning animation and "loading". When going to Inputs, the account is no longer found in the saved input and cannot be added to it or to a new input. I navigated to \Splunk\etc\apps\TA-tenable\local and verified that "ta_tenable_account.conf" is present, and within it the Tenable.sc instance IP, account, secret password etc. are all there. Should I try re-installing the app? I have already restarted the Splunk instance.

The splunkd log has a lot of "TA-tenable\bin\tenable_securitycenter.py" errors from "\Splunk\bin\Python3.exe", to name a few.
We need to run the same query over a list of values (10k to 100k) without knowing the exact key, across the various indexes where they might show up. What's the best way to do this in a scalable way?

For example, the same search over a list of users:

(user1 OR user2 OR user3) | stats count by index

The desired output should be a table with both user and index as columns, not just the index. But again, the field is not known ahead of time, as it varies by index (or the value could simply be in _raw). Should one use map, or how does one assign each user_n to a variable?
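map does answer the "assign each user_n to a variable" part, but it runs one search per input row and is capped by maxsearches, so it is unlikely to scale to 10k-100k values. A sketch, assuming the list lives in a hypothetical lookup file userlist.csv with a user column:

```
| inputlookup userlist.csv
| map maxsearches=1000 search="search index=* TERM($user$)
    | stats count by index
    | eval user=\"$user$\""
| stats sum(count) as count by user, index
```

For large lists, the more scalable pattern is usually the reverse: restrict one base search with a subsearch, e.g. index=* [ | inputlookup userlist.csv | fields user | rename user as query | format ], and then extract which user matched at search time.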
Hi, I am trying to compare a field (Job duration) with its weekly average. Something is wrong with my join: it returns only the first row's values from the main search. Here is the query:

index="n" | stats values(Job) by Date, Duration, Status | join lower(Job) max=0 [ search index="n" earliest=-8d | stats count(eval(if( Date>relative_time(now(),"-d"),NULL,1))) as weekly_total, sum(eval(if(Date>relative_time(now(),"-d"), 0,Duration))) AS total_duration by Job | eval Weekly_Avg=(total_duration/weekly_total) ] | table Job, Duration, Weekly_Avg, Status | dedup Job

Data:

Job    Duration    Date          Status
A      5           2021-03-03    Success
B      9           2021-03-03    Failed
A      5           2021-03-02    Success
B      8           2021-03-02    Success
A      6           2021-03-01    Success
B      7           2021-03-01    Success

What I want:

Job    Duration    Weekly Avg    Status
A      5           5.5           Success
B      9           7.5           Failed

What I get:

Job    Duration          Weekly Avg    Status
A      5                 5.5           Success
B      5 (from Job A)    7.5           Success (from Job A)

** Edit: I am finding there are duplicate rows in my data (exactly the same data), which is also not helping. TIA
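A join-free alternative, assuming Date is comparable with relative_time() the way your subsearch already uses it, is to compute the weekly average in the same pipeline with eventstats, which keeps every row of the main search intact; a sketch:

```
index="n" earliest=-8d
| eval is_today=if(Date > relative_time(now(), "-d"), 1, 0)
| eventstats avg(eval(if(is_today=0, Duration, null()))) as Weekly_Avg by Job
| where is_today=1
| dedup Job
| table Job, Duration, Weekly_Avg, Status
```

eventstats attaches the per-Job average of the previous week's durations to every event, so today's rows keep their own Duration and Status alongside Weekly_Avg, avoiding the row-misalignment you are seeing with join.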
Moving my instance from Splunk Enterprise on VMware to a Docker container. It runs okay with the volumes I created, but when I copy my /opt/splunk/etc contents from the old server to migrate it to the Docker container I get the error below. Splunk support is saying something is wrong with my docker-compose, but I am able to build other containers with it no problem. I did find status code 401 in the documentation for the HEC. I think it is a permission issue, but I have gone through the whole /opt/splunk/etc/auth directory and it looks good. Any ideas?

TASK [splunk_standalone : Setup global HEC] ************************************
s01 | fatal: [localhost]: FAILED! => {
s01 |     "cache_control": "private",
s01 |     "changed": false,
s01 |     "connection": "Close",
s01 |     "content_length": "130",
s01 |     "content_type": "text/xml; charset=UTF-8",
s01 |     "date": "Wed, 03 Mar 2021 20:36:34 GMT",
s01 |     "elapsed": 0,
s01 |     "redirected": false,
s01 |     "server": "Splunkd",
s01 |     "status": 401,
s01 |     "url": "https://127.0.0.1:8089/services/data/inputs/http/http",
s01 |     "vary": "Cookie, Authorization",
s01 |     "www_authenticate": "Basic realm=\"/splunk\"",
s01 |     "x_content_type_options": "nosniff",
s01 |     "x_frame_options": "SAMEORIGIN"
s01 | }
s01 |
s01 | MSG:
s01 |
s01 | Status code was 401 and not [200]: HTTP Error 401: Unauthorized
Hi all,

I have deployed the Splunk Add-on for Unix and Linux on my Linux server and enabled the top.sh script. The script does not return my full command names; it appends a '+' sign at the end of commands which are longer. The log looks like this:

826 root 20 0 474240 8664 6640 S 0.0 0.1 18:36.67 NetworkMan+

When I run the script locally it does show the entire COMMAND name:

826 root 20 0 474240 8664 6640 S 0.0 0.1 18:36.72 NetworkManager

Here is my props.conf section for it:

# The "app" field is the conjunction of COMMAND plus ARGS
# Note that the UNIX app joins arguments with an underscore.
EVAL-app = if(ARGS!="<noArgs>", COMMAND." ".ARGS,COMMAND)
EVAL-process = if(ARGS!="<noArgs>", COMMAND." ".ARGS,COMMAND)
EVAL-process_name = replace(COMMAND, "[\[\]()]", "")
# Truncate needless leading zeroes from the cumulative CPU time field.
EVAL-cpu_time = replace(CPUTIME, "^00:[0]{0,1}", "")
EVAL-time = replace(CPUTIME, "^00:[0]{0,1}", "")
# UsedBytes is calculated as RSZ_KB*1024. Previously it was calculated using
# %MEM and the "Mem:" header from "top -bn 1", which tended to underestimate
# compared to this value. This is a rough measure of resident set size (i.e.,
# physical memory in use).
EVAL-mem_used=RSZ_KB*1024
EVAL-UsedBytes=RSZ_KB*1024

[time]
SHOULD_LINEMERGE=false
LINE_BREAKER=^((?!))$
TRUNCATE=1000000
DATETIME_CONFIG = CURRENT

[source::...top.sample]
sourcetype = top
HEADER_MODE = always
SHOULD_LINEMERGE = false

[top]
SHOULD_LINEMERGE=false
LINE_BREAKER=(^$|[\r\n]+[\r\n]+)
TRUNCATE=1000000
DATETIME_CONFIG = CURRENT
KV_MODE=multi
FIELDALIAS-user = USER as user
FIELDALIAS-process = COMMAND as process
FIELDALIAS-cpu_load_percent = pctCPU as cpu_load_percent
EVAL-vendor_product = if(isnull(vendor_product), "NIX", vendor_product)

top.sh script:

. `dirname $0`/common.sh

HEADER=' PID USER PR NI VIRT RES SHR S pctCPU pctMEM cpuTIME COMMAND'
PRINTF='{printf "%6s %-14s %4s %4s %6s %6s %6s %2s %6s %6s %12s %-s\n", $1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12}'
CMD='top'

if [ "x$KERNEL" = "xLinux" ] ; then
    CMD='top -bn 1'
    FILTER='{if (NR < 7) next}'
    HEADERIZE='{NR == 7 && $0 = header}'
    assertHaveCommand $CMD
    $CMD | tee $TEE_DEST | $AWK "$HEADERIZE $FILTER $FORMAT $PRINTF" header="$HEADER"
    echo "Cmd = [$CMD];  | $AWK '$HEADERIZE $FILTER $FORMAT $PRINTF' header=\"$HEADER\"" >> $TEE_DEST
I'm working with the Splunk TA ONTAP 2.1.7 and the NetApp A400 AFF. The syslog-ng farm we have is receiving the syslog events being sent from the NetApp. Two problems. First, the messages don't look like what is in the sample files.

Mar 3 19:44:28 10.16.48.250 NetApp: NetApp: hex_digits_1.hex_digits_2 hex_digits_3 Wed Mar 03 2021 11:44:27 -08:00 [kern_audit:info:9521] hex_digits4 :: NetApp:http :: aa.bb.cc.dd:port :: NetApp:CS\script :: GET /spi/NetApp/etc/log/stats/ccma/kernel/opm/078029_000300_1614800101000_0239077.ccma.gz HTTP/1.1 :: Success: 200 OK

7-mode samples:

Dec 9 09:48:53 10.0.1.40 Dec 9 09:44:45 [cluster07:kern.syslogd.restarted:info]: syslogd: Restarted. ","10.0.1.40",ontap,"udp:514","ontap:syslog","2016-12-09 09:48:53"
Dec 9 09:53:06 10.0.1.40 Dec 9 09:48:58 [cluster07:iscsi.notice:notice]: ISCSI: New session from initiator iqn.1991-05.com.microsoft:cdslwin07 at IP addr 10.0.1.1 ","10.0.1.40",ontap,"udp:514","ontap:syslog","2016-12-09 09:53:06"

C-mode samples:

Dec 9 11:21:35 10.0.1.39 Dec 9 11:22:01 [crest-cluster01-01:raid.rg.media_scrub.done:notice]: /aggr1/plex0/rg0: media scrub completed in 23:57.00 ","10.0.1.39",ontap,"udp:514","ontap:syslog","2016-12-09 11:21:35"
Dec 9 11:23:39 10.0.1.56 Dec 9 11:21:04 [crest-cluster01-01:raid.rg.media_scrub.start:notice]: /aggr1/plex0/rg0: start media scrub ","10.0.1.56",ontap,"udp:514","ontap:syslog","2016-12-09 11:23:39"

The 7-mode and C-mode samples don't look anything like the syslog events that are in syslog-ng. The second issue is that the messages are being truncated.
Mar 3 20:11:04 NetApp NetApp: NetApp: hex_digits_1.hex_digits_2 hex_digits_3 Wed Mar 03 2021 12:11:04 -08:00 [kern_audit:info:1817] hex_digits_4 :: NetApp:ontapi :: aa.bb.cc.dd:port :: NetApp:DC\script :: <netapp version='1.0' xmlns='http://www.netapp.com/filer/admin' nmsdk_version='9.5' nmsdk_platform='Windows Server 2016' nmsdk_language='Java'><diagnosis-alert-get-iter><query><diagnosis-alert-info><monitor>system-connect|node-connect</monitor><subsystem>FHM-Switch|metrocluster_node|metrocluster|fhm-bridge|sas_connect</subsystem><alert-id>InterclusterBrokenConnectionAlert|InterconnectAdapterOfflineAlert|RaidDegradedMirrorAggrAlert|RaidLeftBehindAggrAlert|RaidLeftBehindSpareAlert|StorageFCAdapterFault_Alert|ClusterSeveredAllLinksAlert|NoISLPresentAlert|FabricSwitchFanFail_Alert|FabricSwitchPowerFail_Alert|FabricSwitchTempCritical_Alert|FabricSwitchUnreachable_Alert|StorageBridgePortDown_Alert|StorageBridgeTempAboveCritical_Alert|StorageBridgeTempBelowCritical... :: Pending:   The above contains incomplete XML.  We have proved to ourselves that the listening port can take longer messages so the thought is the message is truncated at the sender.   If anyone has any ideas about why the messages don't look similar, I don't control the NetApp only Splunk, or why the messages get truncated, please let me know.   TIA Joe   Splunk : 7.3.6 OS : Red Hat Enterprise Linux release 8.3 (Ootpa)
Hello @scelikok, could you help me with the following search please?

I have a main search which groups together all the events with a unique ID (these events are critical, warning and normal alerts that I index in Splunk). I want to add a sub-search to my main search which would allow me to add other events in the form of a transaction. My problem here is that the unique ID in my main search is not the same as in my sub-search. This is what I want to do:

index=index1 (severity=2 OR severity=0 OR severity=1 OR (severity="-1" AND Function=Traps))
| eval ID=Service+"_"+Env+"_"+Apps+"_"+Function+"_"+managed_entity+"_"+varname
| addinfo
| append
    [ search index=index_sqlprod-itrs_toc (managed_entity="vpw-neorc-103 - rec" OR managed_entity="vpw-neorc-903 - rec") rowname="ASC RecordingControl"
    | eval ID=Service+"_"+Env+"_"+Apps+"_"+Function+"_"+varname
    | addinfo
    | sort _time asc
    | eval peer_failed=if(severity=2,1,-1)
    | streamstats sum(peer_failed) as failed_peers by ID
    | eval failed_peers=if((failed_peers=1) AND (severity="0" OR severity="-1"),3,failed_peers)
    | where NOT (failed_peers=1 OR failed_peers=0)
    | sort _time desc
    | transaction ID startswith=(failed_peers=2) endswith=(failed_peers=3) maxevents=2]
| transaction ID startswith=(severity=2) maxevents=2

My goal is to bring these transactions together without having the same ID. Is that possible?
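Since the two ID definitions differ only in that the sub-search's ID omits managed_entity, one option (a sketch, assuming the remaining components really do match across the two indexes) is to build a second, common key without managed_entity in both branches and run the final transaction on that key instead of on ID:

```
index=index1 (severity=2 OR severity=0 OR severity=1 OR (severity="-1" AND Function=Traps))
| eval ID=Service+"_"+Env+"_"+Apps+"_"+Function+"_"+managed_entity+"_"+varname
| eval common_ID=Service+"_"+Env+"_"+Apps+"_"+Function+"_"+varname
| append
    [ search index=index_sqlprod-itrs_toc (managed_entity="vpw-neorc-103 - rec" OR managed_entity="vpw-neorc-903 - rec") rowname="ASC RecordingControl"
    | eval common_ID=Service+"_"+Env+"_"+Apps+"_"+Function+"_"+varname ]
| transaction common_ID startswith=(severity=2) maxevents=2
```

The original per-branch processing (streamstats, the inner transaction, etc.) can stay inside the append subsearch unchanged; the key point is that both branches emit the same common_ID field for the outer transaction to group on.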
we are trying to install the eStreamer add on on a Linux box but we can't. Everytime that we tried nothing happen no error nothing but it doesn't install anyone that have any experience with this in... See more...
We are trying to install the eStreamer add-on on a Linux box but we can't. Every time we try, nothing happens: no error, nothing, but it doesn't install. Does anyone have any experience with this installation?

Thank you.
Here is my Splunk query:

index=test "Entry Done for Id=" | rex field=_raw Id=(?<Id>.*?)# | rex field=_raw UserID=(?<UserId>.*?)# | rex field=_raw Amount=(?<Amount>.*?)# | rex field=_raw PercentageAmount=(?<PercentageAmount>.*?)# | stats list(Id) as "Unique Id" list(UserID) as "User ID" list(Amount) as "Given Amount" list(PercentageAmount) as "Override Amount" | table "Unique ID" "User ID" "Given Discount" "Override Amount"

I want to filter those records from this table which have Given Amount > 50.00 OR Override Amount > 90.00.

Note: Given Amount and Override Amount can be decimals.
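Because list() produces multivalue cells, filtering row by row after the stats is awkward; one approach, sketched with the same field names, is to convert the values to numbers and filter with where before building the table:

```
index=test "Entry Done for Id="
| rex field=_raw "Id=(?<Id>.*?)#"
| rex field=_raw "UserID=(?<UserId>.*?)#"
| rex field=_raw "Amount=(?<Amount>.*?)#"
| rex field=_raw "PercentageAmount=(?<PercentageAmount>.*?)#"
| eval Amount=tonumber(Amount), PercentageAmount=tonumber(PercentageAmount)
| where Amount > 50.00 OR PercentageAmount > 90.00
| table Id, UserId, Amount, PercentageAmount
```

Separately, note that the aliases created by stats ("Unique Id", "Given Amount") must match the names used in the later table command exactly, or those columns will come out empty.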
Hi Splunk Community,

I noticed that calculated fields are missing from the "All configurations" menu in the Splunk UI (Settings > All configurations). We have been using the "All configurations" page to reassign objects once we have finished developing them for our various tenants, but I haven't been able to figure out how to reassign calculated fields. We are on version 8.0.3, in case that's relevant. Does anyone have any idea how to reassign calculated fields in the UI?
The "Device Port" and "Port" fields are incorrectly extracted from messages of the CSCOacs_Failed_Attempts kind in the Technology Add-on for Cisco Secure Access Control Server (ACS). I have been able to solve it by adding the following to props.conf:

EXTRACT-acs_device_port = ,\s+Device\s+Port=(?<Device_Port>[^,]+)
EXTRACT-acs_port = ,\s+Port=(?<Port>[^,]+)

Has anyone had the same problem? Any other ideas? @dshpritz Could you see if this fix could be included in a future version of the app, please? Thanks!