If you're working with Cisco AppDynamics Smart Agent and need a simple way to host your installation files, Python offers a built-in HTTP server that gets you up and running in minutes. This lightweight web server is ideal for quickly sharing files within your network without the hassle of configuring a full-blown web server like Apache or Nginx. In this guide, we'll walk through the steps to set up a web server using Python 3.11 to host your Smart Agent installation files.

Why Use Python's HTTP Server?

The http.server module in Python allows you to serve files over HTTP directly from your file system. It's a great tool for:

- Quick file sharing: No installation or configuration of additional software is required.
- Lightweight: Perfect for small-scale local development or testing environments.
- Cross-platform: Works on any system where Python is installed (Linux, Windows, macOS).

Prerequisites

Python 3.11: Ensure that you have Python 3.11 installed on your system. You can check by running:

```bash
python3.11 --version
```

If you don't have Python 3.11 installed, you can download it from the official Python website.

Cisco AppDynamics Smart Agent installation files: These files should be available in the directory you plan to host. Typically, these will be .deb or .rpm files such as appdsmartagent_<architecture>_<platform>_<version>.deb or appdsmartagent_<architecture>_<platform>_<version>.rpm.

Step-by-Step Guide

1. Organize Your Files

First, create a directory on your system to store the Smart Agent installation files. For this guide, let's assume you create a directory named appd-agent-files.

```bash
mkdir ~/appd-agent-files
```

Next, move the installation files into this directory. These could be .deb or .rpm files depending on your deployment platform.

```bash
mv appdsmartagent_* ~/appd-agent-files
```

2. Start the Python HTTP Server

Navigate to the directory where your installation files are located and run the following command to start the Python HTTP server on port 8000:

```bash
cd ~/appd-agent-files
python3.11 -m http.server 8000
```

This starts an HTTP server that serves the files in the current directory at http://<your-server-ip>:8000. Replace <your-server-ip> with the actual IP address or hostname of the machine running the server.

3. Access the Web Server

Once the server is running, you can access the hosted files by opening a web browser or using a tool like curl or wget to download them. For example, to download a file named appdsmartagent_x86_64_debian_21.10.deb, run:

```bash
wget http://<your-server-ip>:8000/appdsmartagent_x86_64_debian_21.10.deb
```

This downloads the Smart Agent installation file to your local machine. You can also navigate to the host's IP address from a web browser.

Security Considerations

Python's HTTP server is easy to set up but lacks advanced security features like SSL/TLS, user authentication, or access controls. It is best suited for internal or development environments. For production deployments, consider more secure options such as Nginx or Apache. Additionally, always be mindful of your organization's security policy and posture to ensure that using a lightweight solution like this aligns with internal security guidelines.
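One practical way to reduce exposure is to bind the server to a single interface instead of all interfaces. Here is a minimal sketch; the address 10.0.0.5 is a placeholder for your server's internal IP:

```bash
# Serve only on one internal interface instead of 0.0.0.0 (all interfaces).
# 10.0.0.5 is a placeholder; replace it with your server's internal address.
python3.11 -m http.server 8000 --bind 10.0.0.5
```

With --bind, the server no longer answers on other interfaces, which helps keep the files reachable only from the intended network segment.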
Safe Use Cases for Python's HTTP Server

While the Python HTTP server is lightweight and intended for short-term use, it works well in the following scenarios:

- Local Development and Testing: Ideal for quickly sharing files or testing deployments in isolated, controlled environments, such as hosting Cisco AppDynamics Smart Agent files on a local machine or test server.
- Short-Term File Sharing: Suitable for temporary hosting during specific tasks like setup or testing. Simply stop the server with Ctrl + C when done (see the sketch after this list).
- Internal Networks: Safe to use within secure internal networks where access is restricted and traffic is monitored by tools like Cisco's ThousandEyes or AppDynamics.

Always ensure that using this method fits within your organization's security posture and policies.
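For the short-term sharing case, one option is to let the shell stop the server for you after a fixed window. This is a minimal sketch using the standard coreutils timeout command; the one-hour limit is an arbitrary example:

```bash
# Run the file server for at most one hour, then terminate it automatically.
cd ~/appd-agent-files
timeout 1h python3.11 -m http.server 8000
```

This avoids the common failure mode of a "temporary" server quietly running for weeks.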
I'm trying to implement the Splunk Machine Learning Toolkit query found here: https://github.com/splunk/security_content/blob/develop/detections/cloud/abnormally_high_number_of_cloud_security_group_api_calls.yml

Actually just the first part:

```
| tstats count as all_changes from datamodel=Change_test where All_Changes.object_category=* All_Changes.status=* by All_Changes.object_category All_Changes.status All_Changes.user
```

But I'm getting an error (shown in the attached screenshot). How do I fix this?
I have this on other panels but can't get it on a stacked column chart:

```
| streamstats current=f last(Timestamp) as HaltedCycleLastTime by Cycle
| eval HaltedCycleSecondsHalted=round(HaltedCycleLastTime - Timestamp,0)
| eval HaltedCycleSecondsHalted=if(HaltedCycleSecondsHalted < 20,HaltedCycleSecondsHalted,0)
| streamstats time_window=30d sum(HaltedCycleSecondsHalted) as HaltedCycleSecondsPerDayMA
| eval HaltedCycleSecondsPerDayMA=round(HaltedCycleSecondsPerDayMA,0)
| chart sum(HaltedCycleSecondsHalted) as HaltedSecondsPerDayPerCycle by CycleDate Cycle limit=0
```

This produces a stacked column chart based on the chart command, but in Dashboard Studio I expect to see HaltedCycleSecondsPerDayMA as a pickable field and I don't. I added it to the code as overlayFields, but it is still not showing.
On Browser Tests we have Auto-retry enabled, and when a test fails, Auto-retry kicks in and updates the results. On the browser test Page Availability section, clicking in certain places brings up a flyout labeled "Multiple runs found for Uptime". How do I view this section? (I'm having a hard time finding it.)
We have some tokens that are due to expire shortly.

Q1: Does the 'Default' token automatically rotate?
Q2: How do you manually rotate a token using the dashboard? (I am aware of the API option.)
Q3: If the API call is the only option, what permissions are required to make the 'rotate' API call?

Thanks in anticipation.
Ian
Hi,

Can someone please tell me how we can compare the value of a particular day with the value of the same day last week, and create a new field for the deviation?

Example: the command below generates the output shown in the attached screenshot:

```
| stats sum(Number_Events) as TOTAL by Field1 Field2 Field3 Day Time Week_of_year
```

We need the output like below:

1. In tabular form: is it possible to have an output like the attached example?
2. If point 1 is possible, is it then possible to have a timechart with 3 lines over the 24 hours of the day? (Example data for 3 hours is attached.)
   - Line 1 corresponds to Week_of_year - 2 (39)
   - Line 2 corresponds to Week_of_year - 1 (40)
   - Line 3 corresponds to Week_of_year (41)

Thanks in advance for helping me out.
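A hedged sketch of one common pattern for this kind of week-over-week comparison, using timewrap. The index name is a placeholder, and the exact wrapped field names timewrap produces (e.g. TOTAL_latest_week, TOTAL_1week_before) depend on your span and search range, so verify them against your own results:

```
index=your_index earliest=-21d@d
| timechart span=1h sum(Number_Events) as TOTAL
| timewrap 1w
| eval deviation = 'TOTAL_latest_week' - 'TOTAL_1week_before'
```

timewrap splits the series into one column per week (latest_week, 1week_before, 2weeks_before, ...), which also gives the three overlaid lines asked about in point 2; the eval then computes the deviation between a day and the same day one week earlier.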
In September, the Splunk Threat Research Team had two releases of new security content via the Enterprise Security Content Update (ESCU) app (v4.40.0 and v4.41.0). With these releases, there are 58 new analytics, 4 new analytic stories, and 81 updated analytics now available in Splunk Enterprise Security via the ESCU application update process.

Content highlights include:

- The new analytic story "Compromised Linux Host" introduces a robust set of 50 detections for compromised Linux hosts, covering a wide range of activities such as unauthorized account creation, file ownership changes, kernel module modifications, privilege escalation, data destruction, and suspicious service stoppages, enhancing visibility into potential malicious actions and system tampering.
- We have tagged existing analytics related to Black Suit ransomware TTPs into a new "BlackSuit Ransomware" analytic story, providing organizations with targeted threat detection capabilities to identify and mitigate ransomware attacks before they can cause significant damage.
- The ValleyRAT analytic story includes new detections tailored to the ValleyRAT malware, providing enhanced monitoring and threat-hunting capabilities for adversarial activity on Windows systems. These detections improve visibility into malicious registry changes, task scheduling anomalies, and suspicious executable behavior.

New Analytics (58)

- Linux Auditd Add User Account Type
- Linux Auditd Add User Account
- Linux Auditd At Application Execution
- Linux Auditd Auditd Service Stop
- Linux Auditd Base64 Decode Files
- Linux Auditd Change File Owner To Root
- Linux Auditd Clipboard Data Copy
- Linux Auditd Data Destruction Command
- Linux Auditd Data Transfer Size Limits Via Split Syscall
- Linux Auditd Data Transfer Size Limits Via Split
- Linux Auditd Database File And Directory Discovery
- Linux Auditd Dd File Overwrite
- Linux Auditd Disable Or Modify System Firewall
- Linux Auditd Doas Conf File Creation
- Linux Auditd Doas Tool Execution
- Linux Auditd Edit Cron Table Parameter
- Linux Auditd File And Directory Discovery
- Linux Auditd File Permission Modification Via Chmod
- Linux Auditd File Permissions Modification Via Chattr
- Linux Auditd Find Credentials From Password Managers
- Linux Auditd Find Credentials From Password Stores
- Linux Auditd Find Private Keys
- Linux Auditd Find Ssh Private Keys
- Linux Auditd Hardware Addition Swapoff
- Linux Auditd Hidden Files And Directories Creation
- Linux Auditd Insert Kernel Module Using Insmod Utility
- Auditd Install Kernel Module Using Modprobe Utility
- Linux Auditd Kernel Module Enumeration
- Linux Auditd Kernel Module Using Rmmod Utility
- Linux Auditd Nopasswd Entry In Sudoers File
- Linux Auditd Osquery Service Stop
- Linux Auditd Possible Access Or Modification Of Sshd Config File
- Linux Auditd Possible Access To Credential Files
- Linux Auditd Possible Access To Sudoers File
- Linux Auditd Possible Append Cronjob Entry On Existing Cronjob File
- Linux Auditd Preload Hijack Library Calls
- Linux Auditd Preload Hijack Via Preload File
- Linux Auditd Service Restarted
- Linux Auditd Service Started
- Linux Auditd Setuid Using Chmod Utility
- Linux Auditd Setuid Using Setcap Utility
- Linux Auditd Shred Overwrite Command
- Linux Auditd Stop Services
- Linux Auditd Sudo Or Su Execution
- Linux Auditd Sysmon Service Stop
- Linux Auditd System Network Configuration Discovery
- Linux Auditd Unix Shell Configuration Modification
- Linux Auditd Unload Module Via Modprobe
- Linux Auditd Virtual Disk File And Directory Discovery
- Linux Auditd Whoami User Discovery
- Windows DISM Install PowerShell Web Access
- Windows Enable PowerShell Web Access
- Windows Impair Defenses Disable AV AutoStart via Registry
- Windows Modify Registry Utilize ProgIDs
- Windows Modify Registry ValleyRAT C2 Config
- Windows Modify Registry ValleyRat PWN Reg Entry
- Windows Schedule Task DLL Module Loaded
- Windows Schedule Tasks for CompMgmtLauncher or Eventvwr

New Analytic Stories (4)

- BlackSuit Ransomware
- CISA AA24-241A
- Compromised Linux Host
- ValleyRAT

Updated Analytics (81)

- ASL AWS Concurrent Sessions From Different Ips
- Access to Vulnerable Ivanti Connect Secure Bookmark Endpoint
- Anomalous usage of 7zip
- Citrix ADC Exploitation CVE-2023-3519
- Create Remote Thread into LSASS
- Create local admin accounts using net exe
- Detect Credential Dumping through LSASS access
- Detect New Local Admin account
- Detect Remote Access Software Usage DNS
- Detect Remote Access Software Usage File
- Detect Remote Access Software Usage Process
- Detect Remote Access Software Usage URL
- Detect SharpHound Command-Line Arguments
- Detect SharpHound File Modifications
- Disable Defender AntiVirus Registry
- Disabled Kerberos Pre-Authentication Discovery With Get-ADUser
- Domain Controller Discovery with Nltest
- Elevated Group Discovery With Net
- Excessive Usage Of Taskkill
- Executable File Written in Administrative SMB Share
- F5 BIG-IP iControl REST Vulnerability CVE-2022-1388
- Ivanti Connect Secure Command Injection Attempts
- Ivanti Connect Secure System Information Access via Auth Bypass
- Kerberos Pre-Authentication Flag Disabled in UserAccountControl
- Kubernetes Abuse of Secret by Unusual Location
- Kubernetes Abuse of Secret by Unusual User Agent
- Kubernetes Abuse of Secret by Unusual User Group
- Kubernetes Abuse of Secret by Unusual User Name
- Kubernetes Access Scanning
- Kubernetes Create or Update Privileged Pod
- Kubernetes Cron Job Creation
- Kubernetes DaemonSet Deployed
- Kubernetes Falco Shell Spawned
- Kubernetes Node Port Creation
- Kubernetes Pod Created in Default Namespace
- Kubernetes Pod With Host Network Attachment
- Kubernetes Scanning by Unauthenticated IP Address
- Kubernetes Suspicious Image Pulling
- Kubernetes Unauthorized Access
- Ngrok Reverse Proxy on Network
- PowerShell 4104 Hunting
- Powershell Disable Security Monitoring
- Registry Keys Used For Persistence
- Rubeus Command Line Parameters
- Rubeus Kerberos Ticket Exports Through Winlogon Access
- Rundll32 with no Command Line Arguments with Network
- Scheduled Task Deleted Or Created via CMD
- Suspicious Scheduled Task from Public Directory
- System Information Discovery Detection
- Unknown Process Using The Kerberos Protocol
- WinEvent Windows Task Scheduler Event Action Started
- Windows AD Abnormal Object Access Activity
- Windows AD Privileged Object Access Activity
- Windows Abused Web Services
- Windows AdFind Exe
- Windows Alternate DataStream - Base64 Content
- Windows Alternate DataStream - Executable Content
- Windows Alternate DataStream - Process Execution
- Windows Create Local Account
- Windows Disable or Modify Tools Via Taskkill
- Windows Driver Load Non-Standard Path
- Windows Modify Registry Delete Firewall Rules
- Windows Modify Registry to Add or Modify Firewall Rule
- Windows Ngrok Reverse Proxy Usage
- Windows Privilege Escalation Suspicious Process Elevation
- Windows Privilege Escalation System Process Without System Parent
- Windows Privilege Escalation User Process Spawn System Process
- Windows Remote Create Service
- Windows Remote Services Rdp Enable
- Windows UAC Bypass Suspicious Child Process
- Windows UAC Bypass Suspicious Escalation Behavior
- Wsmprovhost LOLBAS Execution Process Spawn
- Add or Set Windows Defender Exclusion
- CMLUA Or CMSTPLUA UAC Bypass
- Eventvwr UAC Bypass
- Executables Or Script Creation In Suspicious Path
- FodHelper UAC Bypass
- Suspicious Process File Path
- WinEvent Windows Task Scheduler Event Action Started
- Windows Access Token Manipulation SeDebugPrivilege
- Windows Defender Exclusion Registry Entry

The team also published the following 4 blogs:

- Splunk Security Content for Threat Detection & Response: Q2 Roundup
- The Final Shell: Introducing ShellSweepX
- ShrinkLocker Malware: Abusing BitLocker to Lock Your Data
- Handala's Wiper: Threat Analysis and Detections

For all our tools and security content, please visit research.splunk.com.

— The Splunk Threat Research Team
Hello, I'm figuring out the best way to address the above situation. We have a huge multisite cluster with 10 indexers on each site; a dedicated instance should act as the SC4S instance and send everything to a load balancer whose job will be to forward everything to the cluster.

Now, there are several documentations about the implementation, but I still can't wrap my head around the direct approach.

The SC4S config stanza would currently look something like this:

```
[http://SC4S]
disabled = 0
source = sc4s
sourcetype = sc4s:fallback
index = main
indexes = main, _metrics, firewall, proxy
persistentQueueSize = 10MB
queueSize = 5MB
token = XXXXXX
```

Several questions about that, though:

- I'd need to create a HEC token first, before configuring SC4S, but in a clustered environment, where do I create the HEC token? I've read that I should create it on the CM and then push it to the peers, but how exactly? I can't find much info about the specifics, especially since I try to configure it via config files. An example of the correct stanza that has to be pushed out would be great; I just can't find any.
- Once pushed, I need to configure SC4S on the other side including the generated token (as seen above). Does the config here seem correct? There's a lack of example configs, so I'm spitballing here a little bit.

Kind regards
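For reference, a minimal sketch of what a manager-pushed HEC input could look like, assuming an app named hec_inputs created under manager-apps on the cluster manager (the app name and token value here are placeholders, not an official recommendation):

```
# $SPLUNK_HOME/etc/manager-apps/hec_inputs/local/inputs.conf on the cluster manager
# (older versions use etc/master-apps). Pushed to the peers with:
#   splunk apply cluster-bundle

[http]
disabled = 0

[http://SC4S]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = main
indexes = main, _metrics, firewall, proxy
sourcetype = sc4s:fallback
```

After applying the bundle, each peer exposes the same token, so the load balancer can spread SC4S traffic across all indexers.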
I am trying to use my friend's credentials to log into Splunk Enterprise, and I am unable to do so.

Also, I am using ODBC to connect Splunk with Power BI. When I do that locally it works, but when I try to connect remotely it fails; I am having issues with the server URL and port number. Any help would be appreciated to solve these queries. TIA.
Hi splunkers!

I have a question about memory.

In my Splunk monitoring console, I see approximately 90% of memory used by Splunk processes. The amount of memory is 48 GB. In my vCenter, I can see that only half of the assigned memory is used (approximately 24 GB out of the 48 GB available).

Which one is telling me the truth: Splunk monitoring or vCenter? And overall, is there something to configure in Splunk to make it use the entire available memory?

Splunk 9.2.2 / Red Hat 7.8

Thank you.

Olivier.
I have created a stacked bar chart based on a data source (query) and everything works, with one exception: I have to select each data value to display when the query runs, through Data Configuration - Y. All of my desired values show up there, but they are not selected by default, so the chart is blank until I select them. Is there a way to have them selected by default?
My query is:

```
index=stuff
| search "kubernetes.labels.app"="some_stuff" "log.msg"="Response" "log.level"=30 "log.response.statusCode"=200
| spath "log.request.path"
| rename "log.request.path" as url
| convert timeformat="%Y/%m/%d" ctime(_time) as date
| stats min("log.context.duration") as RT_fastest max("log.context.duration") as RT_slowest p95("log.context.duration") as RT_p95 p99("log.context.duration") as RT_p99 avg("log.context.duration") as RT_avg count(url) as Total_Req by url
```

And I am getting the response shown in the attached screenshot. I want to group all similar APIs, so that all the /getFile/* calls count as one API, and get the average time.
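A minimal sketch of one way to do this: normalize the url field with a regex before the stats, so variable path segments collapse into one bucket. The /getFile prefix is taken from the question; adjust the pattern to your real paths:

```
| eval url=replace(url, "^/getFile/.*", "/getFile/*")
| stats avg("log.context.duration") as RT_avg count as Total_Req by url
```

replace() rewrites every /getFile/<anything> path to the single literal /getFile/*, so the stats that follows aggregates them as one row.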
Hi,

I have events containing multiple countries. I want to count the country field across different time ranges, sorted from the highest country count to the lowest. For example:

```
Country    Last 24h    Last 30 days    Last 90 days
US         10          50              100
Aus        8           35              80
```

I need a query for this; kindly assist me.
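A minimal sketch of one common approach: search the widest window once and count each narrower window with an eval inside stats (the index and field names are placeholders):

```
index=your_index earliest=-90d
| stats count(eval(_time >= relative_time(now(), "-24h"))) as "Last 24h"
        count(eval(_time >= relative_time(now(), "-30d"))) as "Last 30 days"
        count as "Last 90 days"
        by Country
| sort - "Last 24h"
```

Each count(eval(...)) only counts events whose _time falls inside that window, so a single pass over 90 days of data fills all three columns.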
I have ingested data from InfluxDB into Splunk Enterprise using the InfluxDB add-on from Splunk DB Connect. When performing an InfluxQL search in the SQL Explorer of the created InfluxDB connection, I am getting empty values for the value column.

Query:

```
from(bucket: "buckerName")
  |> range(start: -6h)
  |> filter(fn: (r) => r._measurement == "NameOfMeasurement")
  |> filter(fn: (r) => r._field == "value")
  |> yield(name: "count")
```

Splunk DBX Add-on for InfluxDB JDBC
Apologies for the basic question. As a PoC, we have been provided with a Splunk Enterprise Trial License. As a first step, we would like to ingest Palo Alto logs and fire alerts (by email, etc.), but we don't know how to go about it. (We were able to import past logs manually, but you can't raise alerts on past logs, can you? Would alerts fire if we set the dates to the present? We also don't yet know how to set up the alerts themselves.)

As for the environment, we have set up one virtual server on FJCloud and run Splunk on it; we have not installed Forwarders on any other servers.

If anyone knows, we would greatly appreciate your guidance. Thank you in advance.
I migrated to v9.1.5 and have the TA-XLS app installed, which was working on v7.3.6. Running 'outputxls' generates a 'cannot concat str to bytes' error on the following line of the outputxls.py file in the app:

```python
try:
    csv_to_xls(os.environ['SPLUNK_HOME'] + "/etc/apps/app_name/appserver/static/fileXLS/" + output)
```

- Tried encoding by appending .encode('utf-8') to the string: not working.
- Tried importing the SIX and FUTURIZE/MODERNIZE libraries and ran the code to "upgrade" the script: it just added `from __future__ import absolute_import` and changed a line: not working.
- Tried to define each variable separately, and some other variations: not working.

```python
splunk_home = os.environ['SPLUNK_HOME']
static_path = '/etc/apps/app_name/appserver/static/fileXLS/'
output_bytes = output
csv_to_xls((splunk_home + static_path.encode(encoding='utf-8') + output))
```

I sort of rely on this app to work; any kind of help is appreciated! Thanks!
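A minimal sketch of the kind of fix that usually resolves this class of Python 3 error: make every part of the concatenation a str before joining. Whether `output` arrives as bytes here is an assumption; the decode guard is a no-op if it is already a str:

```python
import os

# 'cannot concat str to bytes' means one operand is bytes; normalize it first.
if isinstance(output, bytes):
    output = output.decode('utf-8')

xls_path = os.path.join(
    os.environ['SPLUNK_HOME'],
    'etc', 'apps', 'app_name', 'appserver', 'static', 'fileXLS',
    output,
)
csv_to_xls(xls_path)
```

The earlier attempts went the other way (encoding str pieces to bytes), which leaves the operands mixed; decoding the bytes side to str avoids that.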
Hi Splunk Community,

I've generated self-signed SSL certificates and configured them in web.conf, but they don't seem to be taking effect. Additionally, I am receiving the following warning message when starting Splunk:

```
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
```

Could someone please help me resolve this issue? I want to ensure that Splunk uses the correct SSL certificates and that the hostname validation works properly.
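For comparison, a minimal sketch of the web.conf settings that enable Splunk Web SSL with custom certificates (the file paths are placeholders; this assumes the certificate and key are in PEM format):

```
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key
```

A Splunk restart is required for the settings to take effect. Note that the hostname-validation warning refers to server.conf [sslConfig] (the cliVerifyServerName setting), which is separate from the Splunk Web certificate configuration in web.conf.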
Splunk Enterprise Version: 9.2.0.1
OpenShift Version: 4.14.30

We used to have OpenShift event logs coming in under sourcetype openshift_events in index=openshift_generic_logs.

However, starting Sept 29, we suddenly did not receive any logs from that index and sourcetype. The Splunk forwarders are still running and we did not make any changes to the configuration. Here is the addon.conf that we have:

```
004-addon.conf

[general]
# addons can be run in parallel with agents
addon = true

[input.kubernetes_events]
# disable collecting kubernetes events
disabled = false
# override type
type = openshift_events
# specify Splunk index
index =
# (obsolete, depends on kubernetes timeout)
# Set the timeout for how long request to watch events going to hang reading.
# eventsWatchTimeout = 30m
# (obsolete, depends on kubernetes timeout)
# Ignore events last seen later that this duration.
# eventsTTL = 12h
# set output (splunk or devnull, default is [general]defaultOutput)
output =
# exclude managed fields from the metadata
excludeManagedFields = true

[input.kubernetes_watch::pods]
# disable events
disabled = false
# Set the timeout for how often watch request should refresh the whole list
refresh = 10m
apiVersion = v1
kind = pod
namespace =
# override type
type = openshift_objects
# specify Splunk index
index =
# set output (splunk or devnull, default is [general]defaultOutput)
output =
# exclude managed fields from the metadata
excludeManagedFields = true
```

Apologies if I'm missing something obvious here.

Thank you!
Hi guys,

Does anyone know whether, even with the Trial version of Splunk Observability Cloud, it still accepts logs sent to it directly by the Splunk OTel Collector?

According to this page, https://docs.splunk.com/observability/en/gdi/opentelemetry/components/splunk-hec-exporter.html, it says: "Caution - Splunk Log Observer is no longer available for new users. You can continue to use Log Observer if you already have an entitlement."

As I'm using the Trial version, I'm just curious to see how Observability Cloud processes logs via fluentd, rather than using Log Observer Connect, which uses the Universal Forwarder to send logs to Splunk Cloud/Enterprise first, with Observability Cloud then just viewing log events via the integration. It seems that Observability Cloud is not showing the ordinary syslog or Windows events which the Splunk OTel Collector sends to it automatically out of the box. I tried setting up my own log file, but nothing shows up in O11y either.
I have two of the exact same searches; one works within the Search app but not in a custom internal app that packages the saved search. The search works in both apps until the where command is introduced:

```
| eval delta_time = delete_time - create_time, hours=round(delta_time/3600,2)
| where delta_time < (48 * 3600)
```

This returns results in the Search app but not in the app that houses this alert. The app is shared globally, as are all the objects within it. I also have the admin role with no restricted indexes or data.