Hi @dataisbeautiful, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @pedropiin, this search doesn't make sense; you could run: index=main source=... ... ... | stats count BY name Ciao. Giuseppe
Hi everyone. I have a query that filters certain events and sums them by category, but I'm facing issues with stats sum. The query is of the form:

index=main source=...
...
| stats count BY name, ticket
| stats sum(count) as numOfTickets by name

Using some test data, removing the last line gives me a table with only one row of the form (considering the first line as the names of the fields):

name  | ticket       | count
name1 | ticket_name1 | 1

Whenever I add the last line, that is, "stats sum(count)...", it returns 0 events. I've already tried to redundantly check that count is a numeric value by doing "eval count = tonumber(count)". Why is this happening? Thank you in advance.
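For reference, a self-contained reproduction (using makeresults with hypothetical name/ticket values, not the poster's real data) suggests the two-stage stats pipeline itself is valid, so a 0-result outcome would point at the events or field extraction rather than the syntax:

```
| makeresults count=3
| streamstats count as n
| eval name="name1", ticket="ticket_".n
| stats count BY name, ticket
| stats sum(count) as numOfTickets by name
```

Run as-is, this should produce a single row with name=name1 and numOfTickets=3.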
Hi @avi123, how about this? You can remove the fields as you are doing, then do | tojson. Here is a sample SPL:

| makeresults
| eval _raw=json_extract("{\"BCDA_AB_CD_01\": 1, \"BCAD_AB__02\": 0, \"BCDA_AB_DC\": 1, \"BCAD_CD_02\": 0}","")
| spath input=_raw
| fields - BCAD_CD_02 BCAD_AB__02
| tojson

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
Is the new version of Cisco Security Cloud, 3.1.1, compatible with Splunk Enterprise 9.4? On Splunkbase, it shows the highest compatible version as 9.3, whereas the old version, 3.0.1, was compatible with Splunk Enterprise 9.4.
Thank you very much for your quick response! Yes, I need a visualization based on timestatus, whether it is Complete, In Progress, or Incomplete.

index=myindex NUM
| where isnull(NXT)
| dedup MC
| join NUM
    [search index=myindex3 ID
    | eval ID = printf("%01d",ID)
    | rename ID as NUM
    | stats count by NUM
    | eval timestatus=case(count > 5, "Complete", count == 0, "Incomplete", count > 0 AND count >= 5, "In Progress") ]
| search NUM = 1
| stats count AS Total

The output should show only the total count. In the background, per NUM, we need to display the colors based on the field "timestatus".
Firstly, you need a search which delivers the value you want. This search is a bit confusing. You are formatting a time field and then, within the same statement, parsing the result using exactly the same format string; you may as well evaluate lrm_frmt_time to lrm_time. timestatus comes from your join, but you are ignoring it in your final stats command, so it is thrown away. You should try to avoid joins if possible. Therefore, I suggest you rewrite the search (or provide a working version), or is that what you are seeking help with, as opposed to how to set the colour on a single value visualisation?
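As a sketch of the simplification suggested above (assuming lrm_time already holds an epoch timestamp, as the subsearch's return of min(_time) implies), the format/parse round-trip

```
| eval lrm_frmt_time = strptime(strftime(lrm_time, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d %H:%M:%S")
| eval lrm_frmt_time = lrm_time
```

reduces from the first line to the second: strftime renders the epoch to a string and strptime with the same format string parses it straight back, so the only effect is truncating sub-second precision.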
index=myindex NUM
| where isnull(NXT)
| dedup MC
| eval lrm_time=[ search index=myindex2
    | eventstats min(_time) as min_time
    | where _time=min_time
    | table min_time
    | dedup min_time
    | return $min_time ]
| eval formatted_time = strptime(AVAIL_TS, "%Y%m%d%H%M%S")
| eval lrm_frmt_time = strptime(strftime(lrm_time, "%Y-%m-%d %H:%M:%S"),"%Y-%m-%d %H:%M:%S")
| eval final_time = if(formatted_time > lrm_frmt_time, formatted_time, null)
| where isnotnull(final_time)
| join NUM
    [search index=myindex3 NUM
    | eval ID = printf("%01d",ID)
    | rename ID as NUM
    | stats count by NUM
    | eval timestatus=case(count > 5, "Complete", count == 0, "Incomplete", count > 0 AND count >= 5, "In Progress") ]
| search NUM = 1
| stats count AS Total

Here is the query; the output is a count, and that value is shown using a single value visualization. The file runs 4 times daily, so I will create 4 panels showing the counts for NUM = 1, 2, 3, 4. How can I color each panel based on whether the field timestatus is Complete, Incomplete, or In Progress? Thanks in advance!
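A note on the case() expression in the subsearch above: the third condition, count > 0 AND count >= 5, can never be reached, because count > 5 matches first and the >= 5 overlap leaves only count == 5, while 1-4 fall through to null. If the intent is that 1 to 5 events mean "In Progress", a corrected sketch (assuming that intent, which the post does not state explicitly) would be:

```
| eval timestatus=case(count > 5, "Complete", count == 0, "Incomplete", count > 0 AND count <= 5, "In Progress")
```
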
@ITWhisperer That's exactly what I was after, thank you.   Thanks also @gcusello @meetmshah 
SRE and DevOps are organisational / cultural paradigms which require buy-in at multiple levels in an organisation; Security does too to some extent but that is easier to "sell", and would probably give shorter term benefits to your career. Having said that, you could look at the Splunk article on SRE and Golden Signals (https://www.splunk.com/en_us/blog/learn/sre-metrics-four-golden-signals-of-monitoring.html?locale=en_us) and even start building dashboards and alerts to represent these signals in your existing environment to help promote the concepts and value of SRE. Ideally, you need to find an Executive Sponsor for SRE (and indeed DevOps) otherwise it can get rather frustrating!
@splunklearner  To set up a proxy, please contact the Network team, discuss your requirements with them, and proceed accordingly.
@splunklearner You can create your own proxy server using an EC2 instance. Here are the steps:

Launch an EC2 Instance for the Proxy:
Go to the AWS EC2 console.
Launch a new instance (e.g., t2.micro for testing) in a public subnet of your VPC.
Use an Amazon Linux 2 AMI (or your preferred OS).
Assign a public IP address and ensure it's in a security group that allows:
- Inbound traffic on port 3128 (the default Squid port) from your Splunk instances' private subnet CIDR.
- Outbound traffic to anywhere (0.0.0.0/0) on HTTPS (port 443) to reach Akamai's API.

Install and Configure Squid:
SSH into the EC2 instance and install Squid:

sudo yum update -y
sudo yum install squid -y

Edit the Squid configuration file (/etc/squid/squid.conf):

sudo vi /etc/squid/squid.conf

Add your Splunk instances' subnet to allow access (replace 10.0.1.0/24 with your private subnet CIDR):

acl splunk_subnet src 10.0.1.0/24
http_access allow splunk_subnet
http_access deny all
http_port 3128

Save and exit, then start Squid:

sudo systemctl start squid
sudo systemctl enable squid

Update Route Tables:
Ensure your Splunk instances' private subnet route table routes traffic destined for the proxy (e.g., the proxy's private IP) to the proxy instance. You may not need this if the proxy is in the same VPC and reachable via its private IP.

Record Proxy Details:
Proxy Host: the private IP of the EC2 instance (e.g., 10.0.2.50).
Proxy Port: 3128 (or whatever you set in squid.conf).

Alternatively, if your organization prefers a managed solution, you could use an AWS NAT Gateway instead of a custom proxy:
Deploy a NAT Gateway in a public subnet.
Update the private subnet route table to route 0.0.0.0/0 to the NAT Gateway.

Note: NAT Gateways don't require a specific "proxy host" configuration in the add-on; they transparently handle outbound traffic. However, the Akamai add-on may still expect a proxy, so a custom proxy might be more compatible.
@splunklearner Step 1: Assess Your AWS Network Architecture. Since your instances are internal, you likely have a Virtual Private Cloud (VPC) with private subnets. To enable outbound internet access:

Check if you already have a NAT Gateway or NAT Instance in a public subnet within your VPC. These are common AWS solutions for allowing private instances to access the internet.
If not, you'll need to set up a proxy server or coordinate with your network team to provide one.

To set up a proxy server in AWS, please check these:
https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-proxy.html
"Be anonymous, create your own proxy server with AWS EC2" - DEV Community

I suggest reaching out to your Network team to set up a proxy.
DNS Server Access: Client IP, Server IP, Query Data, Query Type, Query Bytes, Response Type, Response Data, Response Bytes, Timestamp
Routers & Switches: Firewall IDS Logs, Firewall Rules, Router OS Logs, Routing Tables, NAT Logging
VPN Service: Remote Access + VPN Logging
Network Traffic Metadata: Netflow
Network Traffic Content: Zeek
IDS/IPS: alerts, rules, events
Access Control/Access Management (TACACS | RADIUS | PKE): Access Control/Access Management logs
Accounting (TACACS | RADIUS | PKE): Accounting Logs
Hi @splunklearner, do you have direct connectivity to your Akamai feed from the EC2 instance? If so, you shouldn't need to configure a proxy. Please can you post a screenshot of, or link to, where you are looking? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
UPD: I found the solution. On the dashboard, I used a base search, and it seems that the "table" command cuts something important for the transaction command:

index="hrz" (sourcetype="hrz_file_log" AND "*is provisioning") OR (sourcetype="hrz_file_syslog" AND EventType="AGENT_STARTUP")
| table _time, PoolId, MachineName, _raw
| rex field=_raw "VM\s+(?<MachineName>.*)$"

After a dozen manual attempts, I found that results can vary without changing the time span. Then I narrowed down the search to only one machine name to analyze the transaction behavior. Eventually, I observed that the transaction remains open even if start and end events exist. Then I replaced table with fields, and the transaction started to work as expected. All in all, the working variant is:

index="hrz" (sourcetype="hrz_file_log" AND "*is provisioning") OR (sourcetype="hrz_file_syslog" AND EventType="AGENT_STARTUP")
| fields _time, PoolId, MachineName, _raw
| rex field=_raw "VM\s+(?<MachineName>.*)$"
<..>

If someone understands why transaction behavior changes because of the table command, please supplement my answer.
@kiran_panchavat how and where do I create a proxy server for this requirement? Please let me know.
The transaction command is returning "transactions" with only one event. Try something like this:

index="hrz" (sourcetype="hrz_file_log" AND "*is provisioning") OR (sourcetype="hrz_file_syslog" AND EventType="AGENT_STARTUP")
| rex field=_raw "VM\s+(?<MachineName>.*)$"
| table _time, PoolId, MachineName, _raw
| transaction MachineName startswith="Pool" endswith="startup" maxevents=2 keeporphans=false
| where eventcount > 1
| search (PoolId="*") (MachineName="*")
| search duration<=700
| stats min(duration) AS DurationMin, avg(duration) AS DurationAvg, max(duration) AS DurationMax, min(_time) AS StartTime, max(_time) AS EndTime BY PoolId
| eval DurationMin = round(DurationMin, 2)
| eval DurationAvg = round(DurationAvg, 2)
| eval DurationMax = round(DurationMax, 2)
| eval ProvDuration = round((EndTime - StartTime), 2)
| eval StartTime = strftime(StartTime, "%Y-%m-%d %H:%M:%S.%3Q")
| eval EndTime = strftime(EndTime, "%Y-%m-%d %H:%M:%S.%3Q")
| table PoolId, DurationMin, DurationAvg, DurationMax, ProvDuration, StartTime, EndTime
@splunklearner  If you don’t want to manage a proxy server, you could use a NAT Gateway in a public subnet to provide internet access to your private subnet. However, this won’t work directly with the Akamai add-on’s proxy settings, as it expects an HTTP/HTTPS proxy, not a network-layer NAT. Stick with a proxy server like Squid for compatibility.
@splunklearner Since your Splunk instances can't access the internet directly, you need a proxy server within your AWS environment (or on-premises, if applicable) that can:

Handle HTTPS traffic (port 443, as Akamai uses secure endpoints).
Be accessible from your Splunk instances in the private subnet.
Route traffic to Akamai's servers (e.g., DataStream endpoints or API hosts).

You likely don't have a proxy server set up yet, so you'll need to create one.