All Topics

Hi all, I'm pretty new to ITSI. I have created a glass table with the KPI services CPU, Memory, and Disk Utilization. Now I would like to add the number of episodes for the CPU service, but I am not sure how to get the number of episodes for a given service. Any ideas or suggestions, please? The image below shows the number of episodes that occurred for memory_usage. Here we have grouped the events, so I need this count to appear on the glass table.
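One possible starting point: ITSI writes grouped notable events (episodes) to the itsi_grouped_alerts index, with an itsi_group_id per episode. A sketch, assuming the service name is available in a service_name field (the exact field depends on your correlation search and aggregation policy):

```
index=itsi_grouped_alerts service_name="CPU"
| stats dc(itsi_group_id) as episode_count
```

The resulting single value could then back a glass-table widget via a saved ad hoc search.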
Hey there, I have extracted chart data from the raw field into multivalue fields, but I can't chart the data since Splunk doesn't recognise the fields as numbers.

x_axis      y_axis
-1.292015   4.9024
-1.282425   5.129161
-1.27523    5.200173
-1.26725    5.327875
-1.258461   5.909696
-1.248871   6.406182

I have tried to convert it using:
| eval x_axis2=tonumber(trim(x_axis))
or:
| convert num(x_axis)
but neither worked. Could anybody help me out here?
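A likely cause: tonumber and convert operate on single values, not on each value of a multivalue field. One sketch is to apply the conversion per value with mvmap (available in Splunk 8.0+), or to expand to one event per value first:

```
| eval x_axis=mvmap(x_axis, tonumber(trim(x_axis)))
| eval y_axis=mvmap(y_axis, tonumber(trim(y_axis)))
```

If the chart needs one row per point, mvexpand on one field followed by tonumber on the expanded values may fit better.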
I am trying to build a custom add-on in Splunk using the Splunk Add-on Builder with a shell script input configuration. The POST cURL request from Splunk errors out, but the same cURL succeeds from the CLI. Please help.
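A common reason for this symptom is that scripted inputs run under a stripped-down environment (different PATH, no proxy variables, different user) than an interactive CLI session. A diagnostic sketch: have the input script log the environment it actually sees and resolve curl by absolute path before calling it (the URL in the commented line is a placeholder):

```shell
#!/bin/sh
# Log the environment the scripted input actually runs with, so it can be
# compared against the interactive shell where curl works.
echo "PATH=$PATH"
echo "USER=$(id -un 2>/dev/null)"
# Resolve curl explicitly; fall back to a typical absolute path.
CURL="$(command -v curl || echo /usr/bin/curl)"
echo "using curl at: $CURL"
# Example POST, placeholder URL/payload -- uncomment once the environment checks out:
# "$CURL" -sS -X POST "https://example.com/api" \
#   -H "Content-Type: application/json" \
#   -d '{"key": "value"}'
```

Comparing this output with the CLI session often reveals a missing PATH entry, proxy variable, or certificate bundle.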
Hi, I have a requirement to write a Splunk query that uses an ES data model, to make better use of the fields provided, and I also want to limit my search to my custom indexes. For example, I want to use Authentication.Authentication to return the fields action and _time from the Authentication data model, with the index values limited to A, B, and C only. I tried a query like the one below and it doesn't work:

|`tstats` count from datamodel=Authentication.Authentication by _time,Authentication.action span=10m
|where Authentication.index IN(index=A,B,C)
|timechart minspan=10m count by Authentication.action
|`drop_dm_object_name("Authentication")`

Thanks in advance!!
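A sketch of a corrected form: tstats is a command, not a macro, so it takes no backticks; the index restriction belongs in the tstats where clause (index is available there without a datamodel prefix); and span attaches to _time in the by clause. Index names are assumed here:

```
| tstats count from datamodel=Authentication.Authentication
    where index IN ("a", "b", "c")
    by _time span=10m, Authentication.action
| `drop_dm_object_name("Authentication")`
| timechart minspan=10m sum(count) by action
```

Note that drop_dm_object_name runs before timechart so the split-by field is plain action.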
Is it possible to populate the intermediate missing data in the "Target" column in a continuous fashion over the period of time, based on the start and end values?

_time        Total   Target
2020-08-01           600
2020-09-15   1000
2020-09-16   1250
2020-09-17   1500
2020-09-18   1750
2020-09-19   2000
2020-09-20   2250
2020-09-21   2750
2020-09-22   3250
2020-10-05           5500
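One sketch for linear interpolation, assuming (as in the table) that the first and last rows carry the known Target values: pick up those endpoints with eventstats, then fill each missing row proportionally to its position in time:

```
| sort _time
| eventstats earliest(Target) as start_val, latest(Target) as end_val,
    min(_time) as t0, max(_time) as t1
| eval Target=coalesce(Target,
    round(start_val + (end_val - start_val) * (_time - t0) / (t1 - t0), 0))
| fields - start_val, end_val, t0, t1
```

If the known values can fall mid-series rather than at the ends, a streamstats-based carry of the previous/next known value would be needed instead.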
I am working on improving the use of the risk framework within our instance of Splunk ES. At present there are a number of correlations that only generate risk scores, and we alert on risk objects with high risk scores. While this approach reduces noise in IR, it does make it less intuitive to investigate the reasons a risk object scores so high. I would like to find a way to pick up which risk events were triggered for an object and offer appropriate workflows. So if an alert triggered for a high-risk object with the following risk events:
- Malware Detected
- Suspicious Web Activity
individual workflows would be offered to help investigate each of these risks. Has anyone else done something like this before? Is it even possible?
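As a starting point for surfacing the contributing events, the risk index records one event per risk-score modification, with the originating correlation search in the source field (field names can vary by ES version). A sketch:

```
index=risk
| stats sum(risk_score) as total_score,
        values(source) as contributing_searches,
        count as risk_events
  by risk_object
| sort - total_score
```

The contributing_searches column could then drive a drilldown or lookup that maps each correlation search to an investigation workflow.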
Hi there, looking into /opt/splunk/etc/system/local/authorize.conf I saw a lot of configurations like the ones below. I would like to understand how this came about, and whether it is of any concern.

transition_reviewstatus-10_to 11 = enabled
transition_reviewstatus-10_to 12 = disabled
transition_reviewstatus-10_to 13 = depreciated
transition_reviewstatus......
transition_reviewstatus......

Searching the internal logs (index=_internal component=AuthorizationManager) gives this:

09-22-2020 15:15:25.219 +0800 WARN AuthorizationManager - Capability 'transition_reviewstatus-9_to 8' is not recognized by Splunk. Ignoring...
Hi all, we have configured rsyslog as shown below for port 9001 on two rsyslog servers. When UDP is sent directly to a server it works; however, when we use an F5 load balancer the data does not arrive. The health rule is configured as UDP but is not working. Below is the output I see frequently:

Sep 22 11:13:10 default send string
Sep 22 11:13:15 default send string
Sep 22 11:13:15 default send string
Sep 22 11:13:20 default send string
Sep 22 11:13:20 default send string
Sep 22 11:13:25 default send string
Sep 22 11:13:25 default send string

rsyslog configuration:

[root@auvlud1prapp62 rsyslog.d]# cat 99-mainframe-port9001.conf
# rsyslog configuration for central logging
# Note: 'rsyslog-central' must be replaced to match your hostname
# 'localhost' is expected to work, but some persistent cases shown that only
# setting to the real value of the host name prevents from logging local log duplicated
# in remote location

# provides TCP syslog reception
#$ModLoad imtcp
#$InputTCPServerRun 9001

$ModLoad imudp
$UDPServerRun 9001

# Set the global dynamic file
$template PerHost, "/apps/log/mainframe/mainframe-%$YEAR%-%$MONTH%-%$DAY%.log"
if ($hostname != 'hostname') then ?PerHost
& stop
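Worth noting: the repeated "default send string" lines look like the F5 UDP health monitor's probe payload arriving at rsyslog, which at least shows the F5-to-server path is open. If the monitor probes should not land in the mainframe log, a hedged rsyslog fragment to discard them (legacy property-based filter syntax, placed before the file template):

```
# Drop F5 UDP health-monitor probes so they don't pollute the log files.
:msg, contains, "default send string" stop
```

The missing application data is then more likely a pool/virtual-server configuration question on the F5 side than an rsyslog one.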
Hi, as a newcomer to Splunk, I have the following IronPort log:

<38>Sep 22 02:15:35 mail_logs: Info: Message finished MID 3035876 done
<38>Sep 22 02:15:35 mail_logs: Info: MID 3035876 quarantined to "Virus" (a/v verdict:VIRAL)
<38>Sep 22 02:15:34 mail_logs: Info: MID 3035877 was generated based on MID 3035876 by antivirus
<38>Sep 22 02:15:32 mail_logs: Info: MID 3035876 attachment 'Revised=20Order.doc'
<38>Sep 22 02:15:32 mail_logs: Info: MID 3035876 antivirus positive 'CXmail/RtfObf-D'
<38>Sep 22 02:15:32 mail_logs: Info: MID 3035876 interim AV verdict using Sophos VIRAL
<38>Sep 22 02:15:32 mail_logs: Info: MID 3035876 was too big (1456210/1048576) for scanning by CASE
<38>Sep 22 02:15:32 mail_logs: Info: MID 3035876 matched all recipients for per-recipient policy DEFAULT in the inbound table
<38>Sep 22 02:15:31 mail_logs: Info: MID 3035876 ready 1456210 bytes from <vivek.sood@swiftsecuritas.in>
<38>Sep 22 02:15:31 mail_logs: Info: MID 3035876 Subject 'Revised Order 21-09-20'
<38>Sep 22 02:15:31 mail_logs: Info: MID 3035876 Message-ID '<2132122449.43046.1600730091044.JavaMail.zimbra@swiftsecuritas.in>'
<38>Sep 22 02:15:31 mail_logs: Info: MID 3035876 DMARC: Verification passed
<38>Sep 22 02:15:31 mail_logs: Info: MID 3035876 DMARC: Message from domain swiftsecuritas.in, DMARC pass (SPF aligned True, DKIM aligned True)
<38>Sep 22 02:15:31 mail_logs: Info: MID 3035876 DKIM: pass signature verified (d=swiftsecuritas.in s=73FEA6D0-E5D5-11EA-A7BE-617208D79BCE i=@swiftsecuritas.in)
<38>Sep 22 02:15:13 mail_logs: Info: MID 3035876 SPF: mailfrom identity vivek.sood@swiftsecuritas.in Pass (v=spf1)
<38>Sep 22 02:15:11 mail_logs: Info: MID 3035876 SPF: helo identity postmaster@mx.gulshanindia.com None
<38>Sep 22 02:15:11 mail_logs: Info: MID 3035876 ICID 1856276 RID 0 To: <info@mycompany.com>
<38>Sep 22 02:15:11 mail_logs: Info: MID 3035876 ICID 1856276 From: <vivek.sood@swiftsecuritas.in>
<38>Sep 22 02:15:11 mail_logs: Info: Start MID 3035876 ICID 1856276

I have extracted the fields and I want to create a table to get statistics: table sender, message_subject, recipient, quarantine_dest, reason, virus_vendor_category. When I try it, I get a table row per log line. How can I combine all the lines into a single row of statistics, please? Rgds, silverem
Hi! I am trying to deal with some technical debt, and I thought I had an understanding of what I needed.

Objective: I have some search artifacts inside a custom app on my Search Head Deployer that, once deployed, users are unable to delete. On the deployer, those searches were in the default folder. I gave https://docs.splunk.com/Documentation/Splunk/7.2.10/DistSearch/PropagateSHCconfigurationchanges what I thought was a thorough reading, but I may have missed something.

What I tried:
1) Backed up the app
2) Moved the contents of shcluster/apps/app_name/default/savedsearches.conf on the deployer to local/savedsearches.conf
3) Set "deployer_push_mode = full" in local/app.conf

What appeared to happen: it seems it merged the deployer's local folder into default on the cluster, which may make sense after another spin through the document.

What's the best approach for cleaning up this app? Should I just deploy it to my Search Head Deployer as an app, remove the searches I want to clean up, and then redeploy to the cluster? Thanks! Stephen
Regarding the migration of knowledge objects from a standalone search head's 'search' app to a search head clustering environment, the docs (Migrate_settings_to_a_search_head_cluster) recommend:

Copy the .../search/local directory in the temporary directory to a new app directory, such as search_migration_app, in the temporary directory. Do not name this new app "search."

After pushing from the deployer, should the knowledge objects be moved back into the search app via a search head? The concern is that users will be working in both 'search' and 'search_migration_app' and will have to look through both apps for their objects. Should the knowledge objects remain in 'search_migration_app', with new objects saved into the 'search' app? Is there a way to bulk-move the objects easily from 'search_migration_app' to the 'search' app?
Hi team, we need to ingest DNS logs into Splunk version 8.0 without using a Universal Forwarder. Kindly let us know how we can proceed. Regards, Ramnesh Dubey
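Without a Universal Forwarder, two common options are a network (syslog) input on an indexer or heavy forwarder, or the HTTP Event Collector. A sketch of the network-input route; the port, index, and sourcetype below are assumptions to adapt:

```
# inputs.conf on an indexer or heavy forwarder (not a production-grade
# syslog design -- a dedicated syslog server writing files is more robust)
[udp://514]
sourcetype = dns
index = dns
disabled = 0
```

The DNS server (or an intermediate syslog relay) would then be pointed at this port.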
Hi, one of our customers is unable to upgrade an app; they see this error as soon as they upload the app file:

The "id" field found in app.conf is already in use by another application

I've looked at this post - https://community.splunk.com/t5/Archive/How-to-update-an-add-on-in-Splunk-store/m-p/405426 - and my customer confirmed that he still sees the same problem even after following the steps from that link. Is there any alternative workaround? What can be the reason for this issue? Thanks, Arun
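For context, the id the error refers to lives in the [package] stanza of app.conf, and it is expected to be unique on the instance and to match the app's directory name. A sketch of the relevant fragment (app name is a placeholder):

```
# app.conf -- [package] id must match the app's folder name under etc/apps
# and must not be claimed by any other installed app (including disabled
# or leftover copies of an older version).
[package]
id = my_custom_app
```

A leftover copy of the old app directory under etc/apps with the same id is one plausible cause worth ruling out.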
On the Universal Forwarder download page, the only option available for devices with ARM processors running Debian is the build below, which is for ARMv6. Will this also work on devices with ARMv7 architecture?
We want to check for any Zerologon exploit activity in the environment. Is there a Splunk search available? How can we detect malicious RPC connections? Thanks.
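One hedged starting point: Zerologon (CVE-2020-1472) resets a domain controller's machine account password, which can surface as Windows event 4742 (computer account changed) attributed to an anonymous logon. Index and sourcetype names below are assumptions:

```
index=wineventlog sourcetype=WinEventLog EventCode=4742 src_user="*ANONYMOUS LOGON*"
| table _time, dest, user, src_user
```

This is only a first-pass indicator; published detections also examine Netlogon events (e.g. 5805) and spikes of NetrServerReqChallenge/NetrServerAuthenticate RPC calls, so it is worth checking Splunk Security Essentials / ESCU for maintained content.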
Hi, I want to understand the license used by each instance in Splunk. Can anyone help me understand the points below?
- How can we calculate the license used by each Splunk instance (indexer, search head, etc.)? In the Monitoring Console (MC) I can only see the overall license used by a pool; how can I differentiate?
- What is the source of the license_usage.log file and how is it generated?
- In the MC "Today's license usage" panel, the query contains | eval usedGB=round(used/1024/1024/1024,3) - what does this mean?
Thank you in advance! Cheers!
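On the last point: that eval simply converts a byte count to gigabytes (bytes ÷ 1024³), rounded to three decimals. license_usage.log is written by the license manager from the usage the indexers report, so per-instance breakdowns come from querying it yourself. A sketch splitting today's usage by indexer (the i field holds the reporting indexer's GUID; pool, idx, and h are other useful split fields):

```
index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by i
| eval usedGB=round(bytes/1024/1024/1024, 3)
```

Note that license usage is measured at indexing time, so search heads do not consume license volume themselves.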
I need to search within logs with a schema similar to the one below. It is essentially searching for a certain URI and status within a dynamic list of items: item_1, item_2, ..., item_N.

"log": {
  "type": "web",
  "datetime": "xxxxx",
  "data": {
    "item_1": { "httpstatus": "200", "path": "/pr/s1" },
    "item_2": { "httpstatus": "200", "path": "/pr/s2" }
  }
}

I am wondering how to search across item_* with a regex; in this case it is the field name that needs the regex. Any pointers on where to start?
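For wildcarded field names, SPL's foreach command iterates over fields matching a pattern, with <<FIELD>> standing in for each matched name. A sketch that counts non-200 statuses across all items, assuming the JSON paths extract as shown (spath added in case auto-extraction does not fire):

```
| spath
| foreach data.item_*.httpstatus
    [ eval non_200=coalesce(non_200, 0) + if('<<FIELD>>' != "200", 1, 0) ]
| where non_200 > 0
```

If the item values need to become rows rather than columns, extracting them with rex max_match=0 into a multivalue field and then mvexpand is an alternative pattern.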
Hi everyone, I have one requirement. I have a dashboard with several URLs, and it shows the count as a total. If I select 12th to 13th Sep, it shows:

URL                    Count
/light/page/jk         1184
/light/o/Case/home     110

I have a date drop-down. When I select, say, 12th to 13th Sep, then instead of the total count it should show the individual counts for each day and their difference:

URL               Count1    Count2    Difference
/light/page/jk    45        35        10

My search query:
index="ABC" sourcetype=XYZ Timeout | stats count by URL | sort -count

Can someone guide me on how I can achieve this? Thanks in advance.
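One sketch: tag each event with the day it falls in, pivot with chart so each day becomes a column, and subtract. The boundary below is hard-coded for illustration; in a dashboard it would be derived from the time-picker tokens:

```
index="ABC" sourcetype=XYZ Timeout
| eval period=if(_time < relative_time(now(), "-1d@d"), "Count1", "Count2")
| chart count over URL by period
| eval Difference='Count1' - 'Count2'
| sort - Count1
```

This assumes the selected range spans exactly two days; for longer ranges, chart over URL by a binned _time gives one column per day instead.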
I have Splunk Enterprise set up with SSO enabled via Okta, and provisioning of users is also done by Okta. I want to generate an API access token for these users at the time of provisioning.
1) Is there any way to do that?
2) Can I get the token from the IdP (in this case Okta)?
Hi, I am new to Splunk dashboards. I have events like the ones below; how can I get CPU and memory usage from them? Can anyone help with this?

<182>2020-09-18T08:01:18.787Z vmkernel: cpu56:6319637)Sched: vm 6319638: 6193: Adding world 'vmm0:bcollab-sie-lx', group 'host/user', cpu: shares=-1 min=-1 minLimit=-1 max=-1, mem: shares=-1 min=-1 minLimit=-1 max=-1
<182>2020-09-18T08:07:19.325Z vmkernel: cpu48:6320125)Sched: vm 6320126: 6193: Adding world 'vmm0:burp-collab-sie', group 'host/user', cpu: shares=-1 min=-1 minLimit=-1 max=-1, mem: shares=-1 min=-1 minLimit=-1 max=-1
<182>2020-09-18T07:26:07.290Z vmkernel: cpu34:6317318)Sched: vm 6317319: 6193: Adding world 'vmm0:burpcollab-sie', group 'host/user', cpu: shares=-1 min=-1 minLimit=-1 max=-1, mem: shares=-1 min=-1 minLimit=-1 max=-1
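A caveat first: these vmkernel lines record scheduler share settings at VM power-on (shares/min/max, with -1 meaning default), not live CPU or memory utilization, so actual usage would need vCenter performance data or esxtop. If the goal is just to extract what these events do contain, a sketch (sourcetype is an assumption):

```
sourcetype=vmw-syslog "Sched: vm"
| rex "Adding world 'vmm0:(?<vm_name>[^']+)'"
| rex "cpu: shares=(?<cpu_shares>-?\d+)"
| rex "mem: shares=(?<mem_shares>-?\d+)"
| table _time, vm_name, cpu_shares, mem_shares
```

A dashboard panel could then table or chart these per-VM settings over time.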