Hi, I would like to monitor one value in each event. When it keeps increasing for 5 consecutive events, an alarm should be triggered. I use autoregress to generate the difference between the current event and the previous event (see below). But how can I check that the difference stays positive across five events? Thank you very much!

| autoregress C_avg as C_avg_prev
| eval C_delta=C_avg-C_avg_prev
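One way to check for five consecutive positive deltas is streamstats with a sliding window. This is an untested sketch that reuses the field names from the post (treating the post's C_detal as a typo for C_delta):

```
| autoregress C_avg AS C_avg_prev
| eval C_delta = C_avg - C_avg_prev
| streamstats window=5 sum(eval(if(C_delta > 0, 1, 0))) AS positive_count
| where positive_count = 5
```

An alert on this search that fires when the result count is greater than zero would then trigger after five increases in a row.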
Hi, please bear with me, I'm VERY new to Splunk. I've been googling, trying to find the proper search, but I'm coming up empty. We had someone make a change to an account in Outlook, and we need to know who it was. It's not a typical user account; it's for a conference room, used to book it for meetings. Again, I'm very new to Splunk. My boss asked me to try and figure this out. I appreciate any help you can offer.
@links to members

search earliest=-10m latest=now index="xyz" (host=abcd123 OR host=abcd345) TxnStart2End
| rex "Avg=(?<avgRspTime>\d+)"
| rex "count=(?<count>\d+)"
| timechart span=5m sum(count) as Vol, avg(avgRspTime) as "ART"
| eval TPS=(Vol/300)
| table _time Vol ART TPS
| sort _time

The above query fetches records every 5 minutes, which works fine. The issue is when the Splunk job fails and runs again later. For example: suppose my job last ran at 10:00 AM and fetched records up to 10:00 AM in 5-minute spans. The job failed at 10:01 AM and will run again at 11:00 AM, so the data between 10:01 AM and 11:00 AM is missing. My requirement is to recover that missing data in 5-minute spans, i.e. the 10:05 data, 10:10 data, ... 10:50, 10:55 and 11:00 data. Please help with the correct query.
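One hedged approach is to widen the scheduled search's time window so that a delayed run still re-covers the buckets a failed run missed. The index, hosts, and field names below come from the post; the 65-minute lookback with snapped boundaries is an assumption to adapt:

```
search earliest=-65m@m latest=@m index="xyz" (host=abcd123 OR host=abcd345) TxnStart2End
| rex "Avg=(?<avgRspTime>\d+)"
| rex "count=(?<count>\d+)"
| timechart span=5m sum(count) AS Vol avg(avgRspTime) AS ART
| eval TPS = round(Vol/300, 2)
| table _time Vol ART TPS
| sort _time
```

If the results are collected into a summary index, duplicates from overlapping runs can be removed by deduplicating on _time before use.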
Hello, I am trying to anonymize data in the forwarder using the below. The data AABC123456789012 needs to be transformed to AABC12XXXXXX9012, but the regex does not seem to work. Any help is appreciated.

Sample event:
Mar 31 13:34:56 10.209.7.69 Mar 31 13:34:56 1234567890_admin yia0WAM 65.92.243.116 eyuiopppp.***.com 123.55.000.88 - AABC123456789012 [31/Mar/2022:13:34:39 -0400] 'GET /me-and-***/***intranetstandards/_assets-responsive/v1/fonts/trtr/rtyruroop-ghjtltutt-webfont.woff HTTP/1.1' 200 29480 erty-tyunht.pg.uhg.com 31/Mar/2022:13:34:39.531 -0400 6163 text/plain; charset=UTF-8 "https://****.yyy.com/assets/hr/css/*******.min.css"

transforms.conf
[abcbc_isam]
REGEX = 'AABC[0-9]{5,16}'
DEST_KEY = _raw
FORMAT = $1AABC[0-9]{2}XXXXXX[0-9]{4}$2

props.conf
[host::AE110501]
TRANSFORMS-set = abcbc_isam
disabled = false
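For index-time masking, a simpler alternative to a REGEX/FORMAT transform is SEDCMD in props.conf. The sketch below is untested and reuses the host stanza from the post; note that masking happens at parse time, so it must be applied on a heavy forwarder or indexer rather than a universal forwarder:

```
# props.conf (on the parsing tier: heavy forwarder or indexer)
[host::AE110501]
SEDCMD-mask_account = s/(AABC\d{2})\d{6}(\d{4})/\1XXXXXX\2/g
```

The two capture groups keep AABC12 and 9012 and replace the middle six digits with XXXXXX, turning AABC123456789012 into AABC12XXXXXX9012.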
I have a query to search for particular event IDs from Active Directory and see which targets they apply to. Instead of listing 100 different AD groups, I chose to use a lookup table. My query is as follows:

index=<index name> EventID IN (4728,4729) TargetUserName IN [| inputlookup Test_Splunk_Lookup_Table_v2.csv | return 200 "$Group_Name"] | eval EventID=case(EventID=="4728","Added",EventID=="4729","Removed") | rename Computer AS "Domain Controller", TargetUserName AS "Group", EventID AS "Action" | table "_time","SubjectUserName","Action","MemberName","Group","Domain Controller"

The search works well as long as the group names in the lookup table are unique. But if there is an entry in the lookup table that has derivatives (e.g. AD_Group), it returns all the derivatives as well, instead of only what is in the lookup table.

Example: the lookup table's Group_Name column contains "AD_Group", "AD_Group_1", "AD_Group_2". The search returns all of the above groups plus additional groups not in the lookup table: AD_Group_3, AD_Group_4, etc. I need to know how I can return just the entries in the list and not the derivatives of AD_Group.
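One common way to force exact matches is to have the subsearch emit quoted field=value terms via format instead of bare values via return. An untested sketch using the names from the post:

```
index=<index name> EventID IN (4728,4729)
    [| inputlookup Test_Splunk_Lookup_Table_v2.csv
     | rename Group_Name AS TargetUserName
     | fields TargetUserName
     | format ]
| eval EventID=case(EventID=="4728","Added",EventID=="4729","Removed")
| rename Computer AS "Domain Controller", TargetUserName AS "Group", EventID AS "Action"
| table _time, SubjectUserName, Action, MemberName, Group, "Domain Controller"
```

format expands the lookup rows into (TargetUserName="AD_Group") OR (TargetUserName="AD_Group_1") OR ..., which matches only the exact values in the lookup.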
I would like to understand how Splunk SOAR sends data to the indexer endpoints that are configured under Administration -> Search Settings -> Indexers. I would like to send data to two different HEC endpoints (two different Splunk instances), but I'm not sure if Splunk SOAR treats multiple indexers as something to load balance or multiple things to send all data to. I attempted to use _TCP_Routing on one of the HEC endpoints to take care of this issue, but it doesn't seem to work right so I figured I'd go back to the source. Anyway, if anyone knows how that works, I'd appreciate the insight! Thanks.
I'm trying to run the following commands on an index:

| eval elast=strptime(lastSeen,"%Y-%m-%d %H:%M:%S")
| eval daysSinceLastSeen = round((now() - elast)/86400, 1) ```Calculate days elapsed since lastSeen```
| eval active_status = if((latest(daysSinceLastSeen) <= 28), "active", "inactive")

There is an error that keeps popping up stating the 'latest' function is unsupported or undefined. How do I correct that?
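latest() is a stats-family aggregation, not an eval function, so it cannot appear inside eval/if. One untested sketch is to aggregate first and then classify; the BY field is an assumption, since the post does not show one:

```
| eval elast = strptime(lastSeen, "%Y-%m-%d %H:%M:%S")
| eval daysSinceLastSeen = round((now() - elast)/86400, 1)
| stats latest(daysSinceLastSeen) AS daysSinceLastSeen BY host
| eval active_status = if(daysSinceLastSeen <= 28, "active", "inactive")
```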
Hello, I have a field I created called daysSinceLastSeen that shows the days since an asset was last seen in a scan. I now want to create a histogram to show the distribution of that data by days. How do I do that in SPL?

In case you need my search, it is as follows:

| eval elast=strptime(lastSeen,"%Y-%m-%d %H:%M:%S")
| eval daysSinceLastSeen = round((now() - elast)/86400, 1) ```Calculate days elapsed since lastSeen```
| table _time, status, asset_id, scanID, lastSeen, daysSinceLastSeen, last*, firstSeen, ipaddress, source, host
| sort - _time
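A histogram in SPL is typically bin plus stats count. This untested sketch buckets daysSinceLastSeen into 7-day bins (the bin width is an arbitrary assumption) and renders well as a column chart:

```
| eval elast = strptime(lastSeen, "%Y-%m-%d %H:%M:%S")
| eval daysSinceLastSeen = round((now() - elast)/86400, 1)
| bin daysSinceLastSeen span=7
| stats count AS assets BY daysSinceLastSeen
```

Each row then shows a day range (e.g. 0-7, 7-14) and the number of assets falling into it.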
I am looking to set up an alert that will trigger when no messages have been sent to a queue in the last X minutes. Does anyone have a sample of a similar alert? Thanks in advance!
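A common pattern is a scheduled search over the last X minutes that counts matching events and returns a row only when nothing arrived. Everything below (index, sourcetype, queue field, 15-minute window) is a placeholder to adapt:

```
index=<your_index> sourcetype=<queue_sourcetype> queue_name="<your_queue>" earliest=-15m@m latest=@m
| stats count
| where count = 0
```

With the where clause, the search returns a result only when the queue was silent, so the alert can simply trigger when the number of results is greater than zero.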
I have a list of switches on our network, and once in a while some of them stop reporting to Splunk. I need a query that lists the switches that are not reporting, so I can create a dashboard.
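One untested sketch: keep the expected switch list in a lookup and subtract the hosts that have reported recently. The lookup name, column, index, and 24-hour window are all assumptions:

```
| inputlookup network_switches.csv
| fields host
| search NOT
    [| tstats count WHERE index=network earliest=-24h BY host
     | fields host ]
```

The remaining rows are switches with no events in the window, which can back a dashboard panel directly.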
I see logs leaving the proxy for an external IP. How do I identify the internal IP requesting that external site/IP?
Scripted input is not showing up in search results, but is running fine on the server.
Hello, does Splunk support sound alerts in Enterprise dashboards based on a threshold in the query? For example, I have a query that shows (status > 100) or (status < 100). If (status >= 100), I would like to get a sound alert. I'm displaying these statuses in a Trellis Single Value visualization. Please let me know if it's possible to get an alert once a certain threshold is reached, and how you would set it up in a Splunk dashboard. Thank you!
I'm having an issue with the authentication.conf file on my search head. I have the file managed in Puppet with the necessary SAML configuration. When Splunk restarts, the mapping of users to roles that was created by users signing into Splunk gets cleared (I rely on that mapping because I don't want to have to keep adding users to it manually). This results in searches becoming orphaned by "disabled users" even though the users are still valid; when they sign in the next time, the searches are no longer orphaned. What is the proper way to manage this configuration? Should I place the SAML configuration in a separate location, or would the file still get modified there?
Hi, please indulge me, as I am relatively new to Splunk. I wish to create a query or report I can run on demand to provide proactive data from our client (Windows) machines, namely battery status, CPU usage, disk space usage, and the like. I found the below on Lantern but, pardon my ignorance, have no idea how I would implement it in a Splunk search.

| mstats avg(LogicalDisk.%_Free_Space) AS "win_storage_free" WHERE index="<name of your metrics index>" host="<names of the hosts you want to check>" instance="<names of drives you want to check>" instance!="_Total" BY host, instance span=1m
| eval storage_used_percent=round(100-win_storage_free,2)
| eval host_dev=printf("%s:%s\\",host,instance)
| timechart max(storage_used_percent) AS storage_used_percent BY host_dev

I would appreciate some help and guidance. Thank you in advance!
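As a hedged illustration only, here is the Lantern search with the placeholders filled in with made-up values; the index name, host pattern, and drive letter must be replaced with ones from your environment, and it assumes Windows perfmon data is being collected into a metrics index:

```
| mstats avg(LogicalDisk.%_Free_Space) AS win_storage_free
    WHERE index=win_metrics host=LAPTOP-* instance="C:" instance!="_Total"
    BY host, instance span=1m
| eval storage_used_percent = round(100 - win_storage_free, 2)
| eval host_dev = printf("%s:%s\\", host, instance)
| timechart max(storage_used_percent) AS storage_used_percent BY host_dev
```

Paste it into the search bar in Search & Reporting over, say, the last 60 minutes; analogous mstats searches can be written for CPU and battery counters once those are collected.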
How can we ensure that the HTTP Event Collector works correctly: no dropped connections on the HEC endpoint, a solid flow of data, correctly implemented batching, and so on? What are the best practices around it?
We are quite close to reaching the license limit, data-wise: about 2 TB off the 20 or so TB allowed. What can we do to ensure that we don't breach the license limit?
I have this search query, which returns a single row of data:

index=xyz | search accountID="1234" instanceName="abcd1" | table curr_x, curr_y, curr_z, op1_x, op1_y, op1_z, op2_x, op2_y, op2_z, op3_x, op3_y, op3_z | fields - accountID, instanceName

and I want to display the resulting row of data in a matrix format like:

Option     x       y       z
current    curr_x  curr_y  curr_z
option_1   op1_x   op1_y   op1_z
option_2   op2_x   op2_y   op2_z
option_3   op3_x   op3_y   op3_z

Please note: field names are indicative; the actual values of the respective fields are to be displayed.
Assumption: there will always be only one row for a selected accountID and instanceName.

Can someone please help by letting me know how this can be achieved?
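One untested way to pivot a single row like this is transpose plus xyseries, parsing the option and axis out of each field name. The rex and the name mapping assume exactly the curr_/opN_ naming shown in the post:

```
index=xyz accountID="1234" instanceName="abcd1"
| table curr_x curr_y curr_z op1_x op1_y op1_z op2_x op2_y op2_z op3_x op3_y op3_z
| transpose
| rename column AS field, "row 1" AS value
| rex field=field "(?<Option>.+)_(?<axis>[xyz])$"
| eval Option = if(Option="curr", "current", replace(Option, "op", "option_"))
| xyseries Option axis value
```

transpose turns the single row into field/value pairs, and xyseries regroups them into one row per option with columns x, y, z.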
Hi, trying to correlate failed logon attempts (event 4776) with the IIS OWA logs, I realized that the OWA logs are in UTC by default and I am in CEST time (Madrid). According to the official documentation:

To configure time zone settings, edit the props.conf file in $FORWARDER_HOME/etc/system/local/ or in your own custom application directory in $FORWARDER_HOME/etc/apps/.

https://docs.splunk.com/Documentation/Splunk/8.2.5/Data/Applytimezoneoffsetstotimestamps

I deployed several apps on the Exchange server, but only one app, called TA-Windows-Exchange-IIS, is reporting times incorrectly. So, if I understood correctly, I only need to change the timezone in that specific app. This is what I did, creating the file props.conf in the local path of the app, C:\Program Files\SplunkUniversalForwarder\etc\apps\TA-Windows-Exchange-IIS\local:

[monitor://C:\inetpub\logs\LogFiles\W3SVC1\*.log]
TZ = UTC

[monitor://E:\Program Files\Microsoft\Exchange Server\V15\Logging\Ews]
TZ = UTC

I restarted the splunkforwarder service just in case. The result is that the time is still taken incorrectly from those Exchange events, in UTC. Any idea what I am doing wrong? Thanks a lot.
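One likely issue, offered as a hedged suggestion: [monitor://...] stanzas belong in inputs.conf, while props.conf expects [source::...], [sourcetype], or [host::...] stanzas. TZ also takes effect where timestamp parsing happens, which for a universal forwarder is normally the indexer or a heavy forwarder, so the props.conf may need to live there instead. A sketch of the props.conf form:

```
# props.conf (on the parsing tier: indexer or heavy forwarder,
# unless the UF is doing structured/INDEXED_EXTRACTIONS parsing)
[source::C:\inetpub\logs\LogFiles\W3SVC1\*.log]
TZ = UTC

[source::E:\Program Files\Microsoft\Exchange Server\V15\Logging\Ews\*]
TZ = UTC
```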
Hi all, I am facing an issue related to time zone interpretation. One server is configured with CET and sends logs to Splunk Cloud (to the best of my knowledge, the indexers are in the GMT timezone). This server sends syslog to SC4S servers configured with the GMT time zone. The event time value in Splunk is picked up from the raw event time. Since the Splunk indexers are in GMT and SC4S is in GMT, I am getting a time difference between the event time (server time, CET time zone) and the index time (GMT time zone). Please help: how can I resolve this large difference between event time and index time?

Thanks,
Bhaskar
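One hedged option is to tell Splunk the source's timezone at parse time with a TZ setting in props.conf on the Splunk Cloud side (or in the relevant SC4S sourcetype's TA), scoped to the affected host; the stanza value below is a placeholder for the actual CET server's hostname:

```
# props.conf, applied where the events are parsed
[host::<cet_server_hostname>]
TZ = Europe/Madrid
```

With TZ set, Splunk converts the raw CET timestamps to the correct epoch at index time, so event time and index time should line up regardless of the indexers' own timezone.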