All Posts


I noticed my iPad is getting data and I didn't open a Splunk account. How do I figure out where it is going?
Hi, I am trying to install Splunk version 0.116.0 in an EKS cluster but am getting an error in the operator pod: webhook "minstrumentation.kb.io": failed to call webhook: Post "https://splunk-otel-collector-operator-webhook.namespace.svc:443/mutate-opentelemetry-io-v1alpha1-instrumentation?timeout=10s": no endpoints available for service. I am unable to understand what is going on here. I have been following the latest install guide docs for 0.116.0 onwards. I had no problem on 0.113.0. The doc does mention a similar error, but there was no race condition while enabling the operator. The pods are running successfully, but the operator pod logs show the above error. Thanks, Divya
I am not sure where to even start on this one. I have two log file types I need to extract data from to get final counts. I need to combine them by objectClass so that, for a given day, the "IAL to Enforce" value in the Type 2 log sets the count for the number of Type 1 events. I need to run this over a year. Thank you in advance!

-----Type 1
2025-01-01 00:00:00,125 trackingid="tid:13256464"message='{"UserAccessSubmission":{"uuid":"abc123","mail":"sean@southpark.net","trackingId":"tid:13256464","objectClass":"cartmanUser","csp":"Butters"}}'
2025-01-01 00:01:00,125 trackingid="tid:13256464"message='{"UserAccessSubmission":{"uuid":"abc123","mail":"sean@southpark.net","trackingId":"tid:13256464","objectClass":"cartmanUser","csp":"Butters"}}'
2025-01-02 00:01:00,125 trackingid="tid:13256464"message='{"UserAccessSubmission":{"uuid":"abc123","mail":"sean@southpark.net","trackingId":"tid:13256464","objectClass":"cartmanUser","csp":"Butters"}}'
2025-01-02 00:01:00,125 trackingid="tid:13256464"message='{"UserAccessSubmission":{"uuid":"abc123","mail":"sean@southpark.net","trackingId":"tid:13256464","objectClass":"StanUser","csp":"Butters"}}'
2025-01-02 00:01:00,125 trackingid="tid:13256464"message='{"UserAccessSubmission":{"uuid":"abc123","mail":"sean@southpark.net","trackingId":"tid:13256464","objectClass":"StanUser","csp":"Butters"}}'

------- Type 2
{
  @message: {
    attributeContract: {
      extendedAttributes: [ ]
      maskOgnlValues: false
      uniqueUserKeyAttribute: uuid
    }
    attributeMapping: {
      attributeContractFulfillment: {
        uuid: {
          source: { type: ADAPTER }
          value: uuid
        }
      }
      attributeSources: [ ]
      issuanceCriteria: { conditionalCriteria: [ ] }
    }
    configuration: {
      fields: [
        { name: Application ObjectClass, value: cartmanUser }
        { name: Application Entitlement Attribute, value: cartmanRole }
        { name: IAL to Enforce, value: 2 }
      ]
    }
    id: Cartman
    name: Cartman
  }
  @timestamp: 2025-01-01T00:00:01.833685
}

{
  @message: {
    attributeContract: {
      extendedAttributes: [ ]
      maskOgnlValues: false
      uniqueUserKeyAttribute: uuid
    }
    attributeMapping: {
      attributeContractFulfillment: {
        uuid: {
          source: { type: ADAPTER }
          value: uuid
        }
      }
      attributeSources: [ ]
      issuanceCriteria: { conditionalCriteria: [ ] }
    }
    configuration: {
      fields: [
        { name: Application ObjectClass, value: cartmanUser }
        { name: Application Entitlement Attribute, value: cartmanRole }
        { name: IAL to Enforce, value: 1 }
      ]
    }
    id: Cartman
    name: Cartman
  }
  @timestamp: 2025-01-02T00:00:01.833685
}

The goal would be to get something like this:

Table 1
IAL to Enforce is 2
cartmanUser   2

Table 2
IAL to Enforce is 1
cartmanUser   1
So, you would use it like this

index=main source=xyz (TERM(A1) OR TERM(A2)) ("- ENDED" OR "- STARTED")
| rex field=TEXT "((A1-) |(A2-) )(?<Func>[^\-]+)"
| eval Function=trim(Func), DAT=strftime(relative_time(_time, "+0h"), "%d/%m/%Y")
| rename DAT as Date_of_reception
| eval {Function}_TIME=_time
| stats values(Date_of_reception) as Date_of_reception values(*_TIME) as *_TIME by JOBNAME
``` This adds in all the entries in the lookup at the end of the current results ```
| inputlookup append=t File.csv
``` This then joins all the lookup fields to your result data based on JOBNAME ```
| stats values(*) as * by JOBNAME
``` Now order the fields as needed and sort ```
| table JOBNAME Description Date_of_reception STARTED_TIME ENDED_TIME
| sort -STARTED_TIME

i.e. append all the lookup data to the end and collapse it on JOBNAME.

Note that your _time handling is a little strange - not sure what you're trying to do, but what's wrong with just

| eval Date_of_reception=strftime(_time, "%d/%m/%Y")

Note also that if you have more than one STARTED_TIME or ENDED_TIME, the sort will not work correctly on the multivalue field.
Glad it worked. These types of makeresults searches are insignificant; they only ever run on the search head, as they never search data from the indexers. I often use background searches and tokens to create data that can then be used in <html> panels. They don't consume much.
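As a minimal sketch (the field name and text here are made up for illustration), a search like this never touches the indexers:

| makeresults
``` one result, built entirely on the search head ```
| eval banner="Last refreshed: ".strftime(now(), "%Y-%m-%d %H:%M:%S")

In a dashboard you would typically run it as a hidden search and set a token from its result for the <html> panel to display.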
We also have this issue after upgrading to 9.4.0. The deployment server still deploys apps correctly, so we have been ignoring it. A solution would be nice though.
Hi @PickleRick, totally agree with you, some business models provide interesting challenges, let's say... Can you use OS environment variables in inputs.conf? If so, would they only be read on UF startup? Cheers, Andre
Using the data in the second picture, please show how you want it displayed in the layout of the first picture. It is not clear what the relationship between the two sets of data is.
Hello, I have set up a home lab with Splunk. I have Splunk Enterprise on my admin Windows VM where I make all the changes, and a second Windows VM that is the "victim". Then I have an attack machine running Kali Linux. All machines are on the same network, and I can ping each machine both ways. The idea is to simulate real-world SOC experience.

I have the Splunk forwarder installed on the victim machine and am forwarding all Windows logs (System, Security, Application, and Setup). I have run multiple Nmap scans against the victim machine. I am forwarding the logs to an index called "wineventlog", and my machine is called "Victim". I have used the Splunk guide on detecting port scanning and it yields no results, and I have also used Security Essentials' "internal horizontal scan", which gives an error. I've also checked that logs are being sent; I can see them in the index.

I have no idea why none of my searches are working, and I don't know where to begin. Am I not getting the right data forwarded to Splunk? Am I not searching right? Or have I missed a step? Please note I have absolutely no experience with Splunk. I'm from an IT background so I'm not too useless, but I'm absolutely lost when it comes to Splunk. Any help or suggestions are much appreciated and needed. Apologies if I have not provided enough information; I can provide more if needed. Pictures included in post.
I'm trying to represent this (see the first screenshot), but I can't quite do it. I can't manage to display the last 5 days before the current date in a column. I've managed to do this much, but I'd still need to extract the current date automatically via _time, place it in a column, and have the start time and end time values in those columns:
This I can work out from your case function in your SPL. What I can't work out is what you want your results to look like based on the sample data you have shared. The first graphic bears only passing resemblance to the data you have shown. Please try and explain what you are trying to do.
Hello @AShwin1119 Would you be able to confirm why we need the props / transforms configuration? As far as I understand, we can just have default _TCP_ROUTING configured to send all SH events to the Indexers through the outputs configuration - isn't that right? Ref Doc - https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Forwardsearchheaddata
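For reference, a minimal sketch of the outputs.conf I have in mind on the search head (the group name and indexer hosts here are placeholders):

# outputs.conf on the search head (hypothetical group name and hosts)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

With defaultGroup set, everything the search head generates should be forwarded to that group without any per-event routing rules.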
Hi there,

Is it possible for the AppDynamics Node.JS packages to be updated to fully declare their dependencies?

We use Yarn to manage Node.JS packages. Yarn is more strict than npm about package dependencies, and trips up when installing appdynamics version 24.10.0 because several sub-packages don't declare all of their dependencies. We can work around this by using package extensions in our Yarn config file (.yarnrc.yml):

packageExtensions:
  "appdynamics-protobuf@*":
    dependencies:
      "https-proxy-agent": "*"
      "fs-extra": "*"
      "tar": "*"
  "appdynamics-libagent-napi@*":
    dependencies:
      "fs-extra": "*"
      "tar": "*"
  "appdynamics-native@*":
    dependencies:
      "https-proxy-agent": "*"
      "fs-extra": "*"
      "tar": "*"

This is a point of friction, though, because it must be done in every repository. Examining the AppDynamics packages, I see they use a preinstall script similar to the following:

npm install https-proxy-agent@5.0.0 fs-extra@2.0.0 tar@5.0.11 && node install.js appdynamics-libagent-napi-native appdynamics-libagent-napi 10391.0.0

This seems like the source of the undeclared package dependencies. As a side effect of declaring the dependencies, I believe the first command (npm install) could be removed from the preinstall script. This would also remove the reliance on npm, which will play better with non-npm projects: many projects don't use npm (e.g. we're using Yarn), and relying on npm in the preinstall script can cause issues that are difficult to troubleshoot.
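To illustrate, a sketch of what declaring the dependencies could look like in one of the sub-packages' package.json (the versions are taken from the preinstall script above; the actual manifests may differ):

{
  "name": "appdynamics-libagent-napi",
  "dependencies": {
    "fs-extra": "2.0.0",
    "tar": "5.0.11"
  }
}

With the dependencies declared, the preinstall script would only need to run node install.js.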
This worked great. For anyone else, I verified the totals between the new and old searches just in case with | addtotals row=f col=t  Thank you!
Hi @vvkarur You can use the rex command, like in this example:

| rex field=_raw "\"role\"\:\"(?<role>[^,\"]+)\""
-> "Completed with Warnings" RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW" -> "Successful Launch" RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK" -> "Failure" RUNMAJORSTATUS="FIN"   AND RUNMINORST... See more...
-> "Completed with Warnings" RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW" -> "Successful Launch" RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK" -> "Failure" RUNMAJORSTATUS="FIN"   AND RUNMINORSTATUS="FWF" -> "In Progress" RUNMAJORSTATUS="STA"   AND RUNMINORSTATUS="RUN"
Not really a problem for my case, I just wanted to post here since maybe someone will need to know this. The hidden search approach to solving this issue has the downside that it doesn't work with real-time searches (e.g. when you select a 30-minute window from the time picker).
With tstats you have to be a bit creative, but yes, you can do it. You have to do tstats over a finer time division and then aggregate with timechart to a coarser timespan. For example:

| tstats prestats=t count where index=something by source _time span=1m
| timechart span=10m aligntime=300 count by source
Hi @SN1,

probably that day someone closed the firewall port between the Forwarder and the Indexer. The port should be 9997.

If this is the port, you can try using telnet from the Forwarder:

telnet <host_ip> <port>

Ciao.
Giuseppe
The error you're seeing suggests a network connectivity issue between your forwarder and the receiving Splunk instance (likely an Indexer or Heavy Forwarder). Here are some steps to troubleshoot:

Verify network connectivity:
- Can you connect to the destination host from the forwarder? (Try using netcat with something like `nc -vz -w1 <destinationIP> <destinationPort>`.)
- Is the specified port open and accessible on the destination server (is Splunk listening)?
- Are any other hosts able to connect and send data?

Check firewall rules:
- Ensure no firewall is blocking the connection on either end.

Verify Splunk configurations (see the P.S. below for a sketch):
- On the forwarder, check outputs.conf for correct destination settings.
- On the receiving end, verify inputs.conf for proper port configurations.

Restart Splunk services:
- Sometimes a restart can resolve connectivity issues. Try restarting the forwarder; if no progress, then try restarting Splunk on the receiver to confirm it is working correctly.

Check for any recent network changes:
- Were there any infrastructure modifications around January 29th?

Please let me know how you get on, and consider upvoting / giving karma to this answer if it has helped.

Regards
Will
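P.S. For the Splunk configuration check, a minimal sketch of what the relevant stanzas typically look like (the group name is made up and the port assumes the common default of 9997; adjust to your environment):

# outputs.conf on the forwarder (hypothetical group name, placeholder host)
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = <destinationIP>:9997

# inputs.conf on the receiver, enabling listening on 9997
[splunktcp://9997]
disabled = 0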