Hi everyone, I have VPN logs in this format:

2023-06-21T03:29:16+0000 [stdout#info] LOG ERR: 'LOG_DB RECORD {"username": "duocnv", "common_name": "duocnv", "start_time": 1687312988, "session_id": "aa2d4wW6GaPydjA4", "service": "VPN", "proto": "UDP", "port": "1194", "active": 1, "auth": 1, "version": "3.6.7", "gui_version": "OCmacOS_3.4.2-4547", "platform": "mac", "bytes_in": 1448266, "bytes_out": 15124146, "bytes_total": 16572412, "vpn_ip": "172.27.20.2", "duration": 5168, "node": "ip-10-250-101-154.ap-southeast-1.compute.internal", "timestamp": 1687318156}'

I used rex to extract the field "vpn_ip":

index=openvpnas | rex field=_raw ".*\s"vpn_ip":\s*"(?<vpn_ip>[^"]+)"

But it shows this error:

Error in 'SearchParser': Missing a search command before '^'. Error at position '96' of search query 'search index=openvpnas | search LOG_DB RECORD | re...{snipped} {errorcontext = ublic_ip>[^"]+)"}'.

Can anyone help me?
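The unescaped double quotes inside the rex string terminate it early, which is what trips the parser. A minimal sketch of one way around it, escaping the inner quotes (index and field names taken from the post):

index=openvpnas
| rex field=_raw "\"vpn_ip\":\s*\"(?<vpn_ip>[^\"]+)\""
| table _time vpn_ip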
Hi, Login id: [redacted] This is Ashok from [redacted] Company. I have an Admin role in AppDynamics and I'm able to log in at 'https://www.appdynamics.com/', but I'm unable to submit a support ticket. How can I reach out to support to tell them about this concern? Thanks, Ashok *Post edited by @Ryan.Paredez to redact member email and company name. Please be careful about when and how you share your company's name and email in community posts.
Hi all, I have a unified group (i.e., an Office 365 unified group) created from Office 365. I want to know the membership details, i.e., who has added/removed users to/from this group. The group is also visible in Azure AD. I can check audit logs in Azure AD, but they only go back a month. I am trying the Splunk query below to fetch membership information from both Azure AD and Office 365, but I am not getting any output. ug@contoso.com is my group name.

sourcetype=azure*:management:activity (Operation="*Change user*" OR Operation="*Update user*") ObjectId="*ug@contoso.com*" (UserId!="Certificate" AND UserId!="ServicePrincipal*" AND UserId!="Sync*") (ModifiedProperties{}.NewValue!=" " AND ModifiedProperties{}.OldValue!=" ")
| rename ModifiedProperties{}.NewValue AS ModAdd
| rename ModifiedProperties{}.OldValue AS ModRem
| rename UserId AS "Actioned By"
| rename Operation AS "Action"
| rename ObjectId AS "Member"
| sort -_time
| table _time, ModAdd, ModRem, "Action", Member, "Actioned By"
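One thing that may be biting here: group membership changes usually surface in the Office 365 Management Activity feed under dedicated operations rather than "*Change user*"/"*Update user*", and the ModifiedProperties{} fields are multivalue, so the inequality filters against " " can silently drop everything. A hedged sketch under those assumptions (sourcetype, operation names, and where the group name appears should all be verified against your actual events):

sourcetype=o365:management:activity Workload=AzureActiveDirectory
    (Operation="Add member to group." OR Operation="Remove member from group.") "ug@contoso.com"
| rename UserId AS "Actioned By", Operation AS Action, ObjectId AS Member
| sort -_time
| table _time Action Member "Actioned By"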
Hello, I need to find out which dashboards across all applications aren't being used. Is there a way to do this through Splunk? Thanks.
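One common approach, as a sketch (assuming the search head keeps splunk_web_access logs in _internal and that a 90-day window is representative of "being used"): enumerate all dashboards via REST, then subtract the ones that appear in the web access logs.

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search isDashboard=1
| rename "eai:acl.app" as app, title as dashboard
| table app dashboard
| search NOT
    [ search index=_internal sourcetype=splunk_web_access uri_path="*/app/*" earliest=-90d
      | rex field=uri_path "/app/(?<app>[^/]+)/(?<dashboard>[^/?\s]+)"
      | dedup app dashboard
      | fields app dashboard ]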
I have the Splunk query below, which calculates an SLI, but I need to create an alert to the support group if the SLI value falls below 95. Can someone please help me with that? I am calculating the SLI based on events, and when I try this I am not getting the Alerts option in Splunk. I'd appreciate help on this.

(index=idx_re2eeur0_v5 host=mpllnx0432 EVENT_GROUP="SHIPMENT" SOURCE_SYSTEM="IIB" TARGET_SYSTEM="GGX" EVENT_MSG="Send a ZLIDCTR*" COMPONENTNAME="RNATLL05") OR (index=idx_re2eeur0_v5 host=* EVENT_GROUP="SHIPPED" SOURCE_SYSTEM="WMB" TARGET_SYSTEM="SDS" EVENT_MSG="Tech Ack OK received*" COMPONENTNAME="RNATLL05")
| rex field=NATIVEID "...\S...\S(?<DeliveryID>\d+)\/"
| rex field=_raw "\"nativeID\":\"(?<DeliveryID>\d+)\S"
| transaction DeliveryID startswith="Send a ZLIDCTR*" endswith="Tech Ack OK received*"
| stats count as valid_events count(eval(duration<180)) as good_events avg(duration) as averageDuration
| eval sli=round((good_events/valid_events) * 100, 2)
| stats count
| where sli < 95
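Worth noting: as written, the trailing | stats count throws away the sli field, so the where clause that follows can never match. A sketch of the corrected tail end, keeping everything above the eval unchanged:

...
| eval sli=round((good_events/valid_events) * 100, 2)
| where sli < 95

With that fix, the search returns a row only when the SLI is below 95; saving it via Save As > Alert in the Search app (the Alert option appears there, not when saving from a dashboard panel) with trigger condition "number of results > 0" and an email action to the support group should do what the post describes.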
Hey everyone, I have a couple of databases being ingested through DB Connect that have an excessive number of fields, 300+. Some of these fields are not needed (either null or =" "), and I was wondering whether there is a way to exclude them before or during ingestion. Hope everyone has an awesome day!
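DB Connect itself has no per-field filter at ingestion; the usual approach is to narrow the SQL in the input so only the needed columns are selected, rather than SELECT *. A hypothetical sketch (table and column names invented for illustration):

SELECT host_name, event_time, status, message
FROM audit_log
WHERE status IS NOT NULL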
Hi folks, I created a correlation search that looks for administrators setting passwords to never expire, which then creates a notable event for incident review. I tried setting the severity to both "high" and "critical", but when the notable is created the urgency field shows up only as "informational". When I tested the rule, I ran it against accounts that show up as both "high" and "critical" priority in the Identity Investigator, data I enrich via Active Directory. I checked the urgency_lookup lookup table and it is as you would expect; nothing differs from the default in a way that would make it calculate to informational. What might I be missing? Thanks!
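As a quick diagnostic (a sketch, assuming ES's standard `notable` macro and fields), it can help to confirm what severity and priority values the urgency calculation actually received, since urgency is derived from the notable's severity combined with the asset/identity priority:

`notable`
| table _time rule_name severity priority urgency user src dest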
First, the good news! Splunk offers more than a dozen certification options so you can deepen your knowledge and grow your career potential. Splunk Certifications are designed for different areas of expertise, from observability to security, from users to administrators.   Now for the not-so-good news. Splunk has made the decision to sunset the Splunk Certified Developer certification, which means the certification exam will no longer be available after September 30, 2023. If you currently hold the certification/badge, it will remain valid until its current expiration date. We know that maintaining your Splunk certifications is important to you, so consider recertifying before the EOL to extend the validity of your certification for another three years.    If you have already started or attempted the Splunk Certified Developer certification exam but have not yet passed, we encourage you to complete the certification exam and earn your badge before it’s retired on September 30, 2023.    Pssst…If you are heading to .conf23 in Las Vegas, you can take this certification and dozens of others for the low, low price of just $25.
Hello, I am planning to migrate Splunk Enterprise from a physical server (RHEL7) to a VM (RHEL8). On the new VM, I have already installed the latest version of Splunk Enterprise (9.0.5). The old instance's Splunk Enterprise version is 8.0.2. What are the steps to perform this migration? Will I run into conflicts if I jump versions, since it's not an in-place upgrade? I have checked the following, but could not find anything related to physical-to-virtual migration: https://docs.splunk.com/Documentation/Splunk/latest/Installation/MigrateaSplunkinstance Should I move $SPLUNK_HOME and then proceed with the new Splunk install?
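One common way to sequence this, as a sketch (assuming /opt/splunk on both hosts and placeholder hostnames/filenames; verify the supported 8.0.x-to-9.0.x upgrade path in the docs first): move the whole $SPLUNK_HOME at the same version, confirm it works, then upgrade in place on the VM, rather than restoring an 8.0.2 configuration tree under the fresh 9.0.5 install.

# on the old RHEL7 host: stop Splunk and archive the full installation
/opt/splunk/bin/splunk stop
tar -czf /tmp/splunk-8.0.2.tar.gz -C /opt splunk

# copy to the new VM (hostname is a placeholder)
scp /tmp/splunk-8.0.2.tar.gz rhel8-vm:/tmp/

# on the new RHEL8 VM: set the fresh 9.0.5 install aside, restore 8.0.2, verify
mv /opt/splunk /opt/splunk-9.0.5-fresh
tar -xzf /tmp/splunk-8.0.2.tar.gz -C /opt
/opt/splunk/bin/splunk start    # confirm apps and data look right, then stop

# finally, upgrade in place: untar the 9.0.5 package over /opt/splunk and start,
# which triggers Splunk's migration prompts (package filename is a placeholder)
/opt/splunk/bin/splunk stop
tar -xzf /tmp/splunk-9.0.5-Linux-x86_64.tgz -C /opt
/opt/splunk/bin/splunk start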
Greetings, during model initialization we need a JSON file of param. For example, this is in the ECOD notebook:

model = init(df,param)
print(model)

Could anyone show me the documentation, or how to get/generate the param JSON file? Please note that different model types will have different parameters. Thanks!
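If this is the Splunk App for Data Science and Deep Learning (DSDL) container setup, the param dict isn't usually hand-written: it is generated from the arguments of the fit command, and running fit in stage mode writes both the data and the param JSON into the container so the notebook can load them. A sketch (dataset, feature names, and model name are placeholders; verify the exact options against the DSDL docs):

| inputlookup my_data.csv
| fit MLTKContainer mode=stage algo=ecod feature_1 feature_2 into app:ecod_model

Then, in the notebook, the template's stage() helper reads them back:

df, param = stage("ecod_model")
print(param)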
I was tasked with tracking the usage and cleanup of lookups for my environment, and was wondering: does Splunk create an audit item, or an item within the internal logs, when a search produces a knowledge object such as a lookup, recording the user who created it plus the original search? Currently these lookups are owned by "nobody", and if the search was not saved, it becomes lost unless I can search through history to find any search that references the outputlookup command.
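The _audit index keeps the full search string per user (retention permitting), so searches that wrote lookups via outputlookup can be recovered from there. A sketch:

index=_audit action=search info=granted search="*outputlookup*"
| rex field=search "outputlookup\s+(?:append=\S+\s+)?(?<lookup_name>[^\s|\]]+)"
| stats latest(_time) as last_written values(user) as users by lookup_name
| convert ctime(last_written)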
How do I increase the font size of a table's title in Dashboard Studio? The title of each table is small, and I want to display the dashboard on a bigger monitor, where it is hard to see.
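As far as I know, native table titles in Dashboard Studio don't expose a font-size option; one workaround is to blank the table's own title and place a markdown element above it, since markdown headings render larger. A sketch of that element's JSON source (the title text is a placeholder):

{
    "type": "splunk.markdown",
    "options": {
        "markdown": "## My Table Title"
    }
}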
I have a correlation search creating a notable event; the correlation search is named testemail. I want to notify all the email addresses in the user_email field, and I want the email to come from a specific mailbox I created for such alerts. How do I configure that? I am not looking to change Mail Server Settings or "Send emails as".
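The email alert action accepts a result token for the recipient, and the From address can be overridden per alert, so something along these lines in savedsearches.conf may do it (the mailbox is a placeholder; the same fields appear in the UI as "To" and "From" under the Send email action):

[testemail]
action.email = 1
action.email.to = $result.user_email$
action.email.from = alert-mailbox@example.com

Note that $result.user_email$ expands from the first result row, so the search may need to gather all addresses into one field first (e.g. with stats values).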
We are seeing the following error after our ES upgrade from 8.2.10 to 9.0.5 and Splunk_TA_aws upgrade from 6.1.0 to 7.0.0:

ERROR AwsCredentials [22912 SplunkdSpecificInitThread] - S3ClientProps IamServiceAccountAwsCredentials error failed to open identity token file AWS_WEB_IDENTITY_TOKEN_FILE error=No such file or directory

It appears to occur only once after each restart, and seems to be related to the Splunk_TA_aws app. Is this related to new functionality? Are there new configuration settings that we missed? Current versions: Splunk Enterprise 9.0.5 (build e9494146ae5c), Splunk_TA_aws 7.0.0.
I have a log file that Splunk is monitoring. The problem is, I think, that a custom Python script runs and outputs its results all at once to the log file. The forwarder is taking the entire entry from the script as one event, but I need each line to be an event. How do I configure the forwarder to parse the output to the log file? Here is what I have configured:

inputs.conf:

[monitor://D:\Tools\DailyChecks\Reports\Actionable_report_output_PROD.txt]
index=test_7d
sourcetype=Ibm:BigFix:DailyChecks
disabled=0

props.conf:

[Ibm:BigFix:DailyChecks]
EVENT_BREAKER_ENABLE=false
EVENT_BREAKER=([\r\n]+)
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=false
CHARSET=UTF-8
MAX_TIMESTAMP_LOOKAHEAD=30
# disabled=false
TZ=UTC

Sample of the log file:

-------------Report for-----------PROD
DETAIL Take Action==> Number of encryption certificates of bes license: [0]
FAIL Take Action==> 1.7.6: Actionsite Size Check Actionsite Size Check
FAIL Take Action==> ActionSite Size is too large: ['63733 KB']
DETAIL Take Action==> Total Stopped/Expired Action count (more than 30 days old): [['Total: 96', 'Single Top-Level:4', 'Baseline Component: 92']]
FAIL Take Action==> 1.10.5: Duplicate Computers (by Computer Name) Check for Duplicated Computers
FAIL Take Action==> There are at least 100 duplicates of the following computers: ['PL-MTL-P-151', 'PL-MTL-P-41', 'SIMICS-MACHINE', 'SIMICSLESS-VM', 'localhost.localdomain', 'simics-vm061', 'simics-vm062', 'simics-vm063', 'simics-vm064', 'simics-vm065', 'simics-vm066', 'simics-vm067']
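If the forwarder is a universal forwarder, LINE_BREAKER in its props.conf has no effect, because the UF does not parse events; line breaking happens on the indexer or a heavy forwarder. A sketch under that assumption:

# props.conf on the universal forwarder: let chunks split cleanly per line
[Ibm:BigFix:DailyChecks]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

# props.conf on the indexers (or heavy forwarder): the actual per-line breaking
[Ibm:BigFix:DailyChecks]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)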
It appears that using now() inside of the map command will always return the time that the map was started, rather than the time for each loop. The SPL below shows an example of this. Does anyone have any thoughts on how to get the time for each iteration of the loop?

| makeresults count=100
| map maxsearches=100 search="| makeresults count=1 | eval outer_time=$_time$ | eval outer_time_formatted=strftime($_time$, \"%Y-%m-%d %H:%M:%S\") | eval now=now()"
| table outer_time_formatted outer_time _time now
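One possibility: eval's time() function returns the wall-clock time at evaluation for each result, unlike now(), which is fixed when the search dispatches. A sketch of the same loop with time() swapped in:

| makeresults count=100
| map maxsearches=100 search="| makeresults count=1 | eval iter_time=time() | eval iter_time_formatted=strftime(iter_time, \"%Y-%m-%d %H:%M:%S\")"
| table iter_time iter_time_formatted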
We have an issue with pan:threat in our dev environment having fields that end like this: \", — the backslash escapes the quote, so the field isn't closed and the extraction grabs extra text. An example event:

Jun 20 09:45:17 pan_firewall 1,2023/06/20 09:45:17,016201006029,THREAT,url,2561,2023/06/20 09:45:17,10.10.10.10,11.11.11.11,12.12.12.12,13.13.13.13,Internal-Gateway-Client-Connect,,,web-browsing,vsys1,inside,inside,ethernet1/2,ethernet1/2,Shared_Log_Fwd,2023/06/20 09:45:17,633045,1,55384,443,55384,20077,0x140b000,tcp,alert,"pan_firewall/default.asp\",(9999),PAN-Allowed-Sites,informational,client-to-server,7237130175635929631,0x8000000000000000,10.0.0.0-10.255.255.255,United States,,,0,,,1,,,,,,,,0,29,50,52,0,vsys1,pan_firewall,,,,get,0,,0,,N/A,unknown,AppThreat-0-0,0x0,0,4294967295,," PAN-Allowed-Sites,health-and-medicine,low-risk",27b923e5-b821-4544-8790-5eb413f7ed4a,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0,2023-06-20T09:45:17.691+00:00,,,,internet-utility,general-internet,browser-based,4,"used-by-malware,able-to-transfer-file,has-known-vulnerability,tunnel-other-application,pervasive-use",,web-browsing,no,no

The resulting mis-extracted fields:

url = ,"pan_firewall/default.asp\",(9999),PAN-Allowed-Sites,informational,client-to-server,7237130175635929631,0x8000000000000000,10.0.0.0-10.255.255.255,United States,,,0,,,1,,,,,,,,0,29,50,52,0,vsys1,pan_firewall,,,,get,0,,0,,N/A,unknown,AppThreat-0-0,0x0,0,4294967295,," PAN-Allowed-Sites

category = low-risk",27b923e5-b821-4544-8790-5eb413f7ed4a,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0,2023-06-20T09:45:17.691+00:00,,,,internet-utility,general-internet,browser-based,4,"used-by-malware

I've tried multiple SEDCMD settings to change the \", but even though the setting shows up in btool, the events still contain the \":

/data/splunk/hot/apps/splunk/etc/apps/Splunk_TA_paloalto/default/props.conf
SEDCMD-palo_alto_remove_backslah = s/\\\",/\\ \",/g

I did see the recommendation to send the data to an HF, but this data arrives via syslog and then goes to the indexers. The regex works in the various tools I've tried. Data is somewhat anonymized. Any suggestions? TIA, Joe
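One thing worth checking: SEDCMD is applied at parse time against the sourcetype the event arrives with, and the Palo Alto TA renames events to pan:threat afterwards via transforms, so a stanza targeting pan:threat, or one that lives only in default/ (which upgrades overwrite), may never fire. A sketch, assuming the feed lands as the TA's base sourcetype pan:firewall (older TA versions used pan_log) and placing the override in local/ on the indexers:

# $SPLUNK_HOME/etc/apps/Splunk_TA_paloalto/local/props.conf on the indexers
[pan:firewall]
SEDCMD-remove_escaped_quote = s/\\",/ ",/g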
I created a custom app and used the built-in template that adds a bunch of generic reports and dashboards. I am unable to delete any of the saved reports that came with the template by default. I am getting this error:

This saved search failed to handle removal request due to Object id=Histogram of delay in seconds cannot be deleted in config=savedsearches.

Any ideas how I can delete these? I am getting duplicate errors on our ES SH because this template was used multiple times. I have tried disabling the reports, but that doesn't work. I have tried deleting from Settings > "Searches, Reports, and Alerts" as well as within the app, and neither worked.
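If the object was written into the app's default directory, which the error message suggests, it can't be deleted through the UI or REST at all; the stanza has to be removed from the app's default/savedsearches.conf on disk, followed by a restart. If it lives in local or a user directory, the REST endpoint usually works and gives a clearer error. A sketch using the report name from the error (credentials and app name are placeholders):

curl -k -u admin:changeme -X DELETE \
    "https://localhost:8089/servicesNS/nobody/my_custom_app/saved/searches/Histogram%20of%20delay%20in%20seconds"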
I have a dbxquery plus SPL commands that produce a certain table, which I'd like to refer to by a table name. Is that possible? The present table needs some new columns, which the query above adds, but the schema of my final table for my bar chart is a little different from the source table, so I can't build on top of that query using |, or I don't know how. Hence I was wondering whether I can just reference the result by a table name.
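SPL has no named intermediate tables, but the result set can be persisted under a name and referenced later, e.g. as a CSV lookup used as a staging table. A sketch (connection, query, and all field names are placeholders):

| dbxquery connection=my_connection query="SELECT id, amount, region FROM source_table"
| eval amount_band=case(amount<100, "small", amount<1000, "medium", true(), "large")
| outputlookup staging_table.csv

The bar-chart search can then start from the staged table with whatever schema it needs:

| inputlookup staging_table.csv
| stats sum(amount) as total by region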
Hi everyone, I am trying to redirect logs from a locally installed Splunk Enterprise to OpenSearch Logstash or to AWS Kinesis. I am quite new to Splunk, so I am having trouble understanding how this can be achieved. Am I right that: 1. We can use a heavy forwarder so the output can be sent to a Logstash host and port? 2. There is no way to stream data to AWS Kinesis, with add-ons, the CLI, or Splunk Web? It is probably possible to use the Splunk SDK/API to get a stream of data so it can be transferred programmatically to Kinesis via the AWS SDK. I hope someone has experience with this. Thanks!
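On point 1: yes, a heavy forwarder (or any full Splunk Enterprise instance) can route events to a third-party system over TCP syslog, which a Logstash tcp or syslog input can receive. A minimal outputs.conf sketch (host and port are placeholders):

[syslog]
defaultGroup = logstash_group

[syslog:logstash_group]
server = logstash.example.com:5514
type = tcp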