All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I have set up a home lab with Splunk. I have Splunk Enterprise on my admin Windows VM, where I make all the changes, and a second Windows VM that is the "victim". Then I have an attack machine running Kali Linux. All machines are on the same network, and I can ping each machine in both directions. The idea is to simulate real-world SOC experience.

I have the Splunk forwarder installed on the victim machine and I am forwarding all Windows logs (System, Security, Application, and Setup) to an index called "wineventlog"; the machine is called "Victim". I have run multiple Nmap scans against the victim machine.

I have used the Splunk guide on detecting port scanning and it yields no results. I have also used the Security Essentials "Internal Horizontal Scan" detection, and it gives an error. I have checked that logs are being sent; I can see them in the index.

I have no idea why none of my searches are working, and I don't know where to begin. Am I not forwarding the right data to Splunk? Am I not searching correctly? Or have I missed a step?

Please note I have absolutely no experience with Splunk. I'm from an IT background, so I'm not too useless, but I'm absolutely lost when it comes to Splunk. Any help or suggestions are much appreciated and needed. Apologies if I have not provided enough information; I can provide more if needed. Pictures included in the post.
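As a starting point, a minimal port-scan detection sketch might look like the following. This is only an illustration, not the Splunk guide's own search: the field names (Source_Address, Destination_Address, Destination_Port), EventCode=5156, and the threshold of 50 are all assumptions that depend on your Windows audit policy and sourcetype. Note that 5156 events only appear if "Filtering Platform Connection" auditing is enabled on the victim machine, which may explain empty results.

```
index=wineventlog EventCode=5156
| bin _time span=1m
| stats dc(Destination_Port) AS num_ports BY _time, Source_Address, Destination_Address
| where num_ports > 50
```

The idea is to count distinct destination ports per source per minute; an Nmap scan should stand out as one source hitting many ports in a short window. Start by checking which EventCodes and fields your index actually contains before tuning the search.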
I'm trying to represent this, but I can't quite do it. I can't manage to display the last 5 days before the current date as columns. I've managed part of it, but I still need to extract the current date automatically from _time, place it in a column, and have the start time and end time values in these columns.
That much I can work out from the case function in your SPL. What I can't work out is what you want your results to look like based on the sample data you have shared; the first graphic bears only a passing resemblance to the data you have shown. Please try to explain what you are trying to do.
Hello @AShwin1119 Would you be able to confirm why we need the props/transforms configuration? As far as I understand, we can just have a default _TCP_ROUTING configured to send all SH events to the indexers through the outputs configuration, can't we? Ref doc: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Forwardsearchheaddata
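For context, the default-routing setup being described might be sketched like this in outputs.conf on the search head. The group name and indexer hostnames here are placeholders, not values from the referenced doc:

```
# outputs.conf on the search head (illustrative values)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

With a defaultGroup set, all of the search head's internal events are forwarded to the indexers without needing per-sourcetype props/transforms routing; props/transforms would only be needed for selective routing of specific data.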
Hi there, Is it possible for the AppDynamics Node.js packages to be updated to fully declare their dependencies? We use Yarn to manage Node.js packages. Yarn is stricter than npm about package dependencies, and trips up when installing appdynamics version 24.10.0 because several sub-packages don't declare all of their dependencies. We can work around this by using package extensions in our Yarn config file (.yarnrc.yml):

packageExtensions:
  "appdynamics-protobuf@*":
    dependencies:
      "https-proxy-agent": "*"
      "fs-extra": "*"
      "tar": "*"
  "appdynamics-libagent-napi@*":
    dependencies:
      "fs-extra": "*"
      "tar": "*"
  "appdynamics-native@*":
    dependencies:
      "https-proxy-agent": "*"
      "fs-extra": "*"
      "tar": "*"

This is a point of friction, though, because it must be done in every repository. Examining the AppDynamics packages, I see they use a preinstall script similar to the following:

npm install https-proxy-agent@5.0.0 fs-extra@2.0.0 tar@5.0.11 && node install.js appdynamics-libagent-napi-native appdynamics-libagent-napi 10391.0.0

This seems to be the source of the undeclared package dependencies. As a side effect of declaring the dependencies, I believe the first command (npm install) could be removed from the preinstall script. This would also remove the reliance on npm, which would play better with non-npm projects: many projects don't use npm (e.g. we're using Yarn), and relying on npm in the preinstall script can cause issues that are difficult to troubleshoot.
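If it helps, declaring these dependencies might look something like the following in each affected sub-package's package.json. The versions are taken from the preinstall script above; whether to pin exact versions or use ranges would be up to the AppDynamics team:

```
{
  "dependencies": {
    "https-proxy-agent": "5.0.0",
    "fs-extra": "2.0.0",
    "tar": "5.0.11"
  }
}
```

With the dependencies declared, both npm and Yarn would install them as part of normal dependency resolution, so the `npm install` step in the preinstall script would become redundant.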
This worked great. For anyone else, I verified the totals between the new and old searches just in case with | addtotals row=f col=t  Thank you!
Hi @vvkarur  You can use the rex command, like in this example. | rex field=_raw "\"role\"\:\"(?<role>[^,\"]+)\""  
"Completed with Warnings": RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW"
"Successful Launch": RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK"
"Failure": RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWF"
"In Progress": RUNMAJORSTATUS="STA" AND RUNMINORSTATUS="RUN"
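A mapping like this can be expressed with an eval case function, along these lines (the output field name "status" is just an illustration, not from the original post):

```
| eval status=case(
    RUNMAJORSTATUS=="FIN" AND RUNMINORSTATUS=="FWW", "Completed with Warnings",
    RUNMAJORSTATUS=="FIN" AND RUNMINORSTATUS=="FOK", "Successful Launch",
    RUNMAJORSTATUS=="FIN" AND RUNMINORSTATUS=="FWF", "Failure",
    RUNMAJORSTATUS=="STA" AND RUNMINORSTATUS=="RUN", "In Progress")
```

case() evaluates the condition/value pairs in order and returns the value for the first condition that matches; any event matching none of the pairs gets a null status unless a catch-all pair (e.g. `true(), "Unknown"`) is appended.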
Not really a problem in my case; I just wanted to post here in case someone else needs to know this. The hidden-search approach to solving this issue has the downside that it doesn't work with real-time searches (e.g. when you select a 30-minute window from the time picker).
With tstats you have to be a bit creative, but yes, you can do it. You have to run tstats over a finer time division and then aggregate with timechart to a coarser timespan. For example:

| tstats prestats=t count where index=something by source _time span=1m
| timechart span=10m aligntime=300 count by source
Hi @SN1, probably that day someone closed the firewall port between the forwarder and the indexer. The port should be 9997. If this is the port, you can try using telnet from the forwarder:

telnet <host_ip> <port>

Ciao. Giuseppe
The error you're seeing suggests a network connectivity issue between your forwarder and the receiving Splunk instance (likely an indexer or heavy forwarder). Here are some steps to troubleshoot:

Verify network connectivity:
- Can you connect to the destination host from the forwarder? Try using netcat with something like `nc -vz -w1 <destinationIP> <destinationPort>`.
- Is the specified port open and accessible on the destination server (is Splunk listening)?
- Are any other hosts able to connect and send data?

Check firewall rules:
- Ensure no firewall is blocking the connection on either end.

Verify Splunk configurations:
- On the forwarder, check outputs.conf for correct destination settings.
- On the receiving end, verify inputs.conf for proper port configurations.

Restart Splunk services:
- Sometimes a restart can resolve connectivity issues. Try restarting the forwarder; if there is no progress, then try restarting Splunk on the receiver to confirm it is working correctly.

Check for any recent network changes:
- Were there any infrastructure modifications around January 29th?

Please let me know how you get on, and consider upvoting/giving karma to this answer if it has helped. Regards, Will
When you ran the telnet check, was this from the same host you are trying to access Splunk with via the browser, or from the Splunk server itself? If this was checked from the Splunk server, then I would suggest checking the firewall rules on that host, i.e. whether either `iptables` or `firewalld` is configured to allow inbound traffic on port 8000. You can check your firewall rules with `sudo iptables -L` or `sudo firewall-cmd --list-all`, depending on how this is configured on your host. Also check that you are using https in your URL if Splunk has been configured with SSL enabled.
Hello, we have been unable to receive logs from forwarders since 29 January. I checked splunkd.log and found this error:

ERROR TcpOutputFd [110883 TcpOutEloop] - Connection to host=<ip>:port failed

What should I do?
Refer to the tables in my original post. I'm doing a count of events per span using tstats, so just how many events there were from 00:00-04:00, 04:00-08:00, etc. But Splunk chooses that starting point of 00:00, and sometimes it's a very poor choice, so I would like to be able to adjust it so that instead it would be 01:00-05:00, 05:00-09:00, etc. The methods I've found in the forum do not seem to work with tstats: as shown in my second table, the _time labels are adjusted but the values are not recalculated.
I want to be able to support an adaptive response action in Splunk Enterprise Security, but when I put some value there I get the error, even though it's not empty. What did I do wrong?
Does it work with tstats?
Ah, I do see it now, thanks. I was assuming all data would be included in one of the "Workload" (e.g. "Exchange") or "app" data values, but the sourcetype "o365:reporting:messagetrace" does not have "Workload" or "app" values, and I was excluding the message trace events with search parameters like Workload="*". Appreciate the help!
This looks like JSON, so you should ingest it as such. Alternatively, you could use spath to extract the fields, or look at the json functions.
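For instance, a minimal spath sketch for the field in question might be:

```
| spath input=_raw path=role output=role
```

One caveat, stated as an observation about the sample rather than a certainty about the real events: the string in the question is missing a comma after the "bday" value, so if the actual data is malformed like that, strict JSON parsing (spath or the json functions) may fail, and a rex-based extraction is more forgiving.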
I have a string like this: {"code":"1234","bday":"15-02-06T07:02:01.731+00:00" "name":"Alex", "role":"student","age":"16"}, and I want to extract role from this string. Can anyone suggest a way to do this in Splunk?