All Posts

To add to @ITWhisperer 's answer - Splunk processes timestamps as "unix timestamps" - integers containing the number of seconds since the epoch. As such, a timestamp is "timezoneless"; it's only rendered into a string, possibly containing a timezone description, when needed. But a timestamp is always rendered in the user's timezone (the one set in the user's preferences), whether rendered automatically by the WebUI or explicitly when strftime is called. So while with strptime you can read and apply the timezone offset from the string representation of a given point in time, strftime doesn't let you specify the timezone freely. The only thing you can do is "cheat" a bit by manually adjusting the timestamp by the offset to another timezone and rendering it in your local timezone, but without displaying said timezone.
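For example, a minimal sketch of that "cheat" (assuming the user's preference timezone is UTC and you want the output to read as UTC+5 wall-clock time; the offset and field names are purely illustrative):

| makeresults
``` shift the epoch value by the offset to the target timezone (UTC+5 here, hypothetical) ```
| eval shifted = _time + 5*3600
``` render without %z so no (now misleading) timezone is displayed ```
| eval rendered = strftime(shifted, "%Y-%m-%d %H:%M:%S")

Keep in mind that the shifted value no longer represents the true point in time, so use it only for display, never for further time arithmetic.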
Hi, Firstly, thanks for the fast reply. However, there are cases where users are required to access both sensitive and non-sensitive indexes at the same time using the same user account. Another concern is the scaling factor. Below is my scenario
Hello, when I run the below SPL, it gives me all the regions that a user has accessed from. If I want to exclude a region or country from the list, where in the query do I add the exclusion, and what is the SPL? I have tried several exclusion queries but they didn't work. Please help.

| tstats count(Authentication.user) FROM datamodel=Authentication WHERE (index=* OR index=*) BY Authentication.action Authentication.src
| rename Authentication.* AS *
| iplocation src
| where len(Country)>0 AND len(City)>0
Also, how could we capture the terminated users who are accessing their accounts on a daily basis? We created the information point using the method name and class, but the termination date is not coming back in the response. Do we have any other options to capture these terminated users, and users aged 59+, accessing their accounts on a daily basis?
Yes, but not to the core level. Any input on how to capture the users aged 59+ who are accessing their accounts on a daily basis?
Hi @Viveklearner , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @Anton , as I said, you have to divide your data into two indexes: one with events containing sensitive data, another containing all the other events. Then you have to create a specific role to access the sensitive index (only this role is enabled to access this data), assigning this role only to the users to enable. In this way, only the enabled users can access the index, even if they also have other roles. The important thing is that you have a dedicated index for this, because data access grants in Splunk are defined at the index level. Let me know (and all the Community) what Support will say to you. Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
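Just as an illustration (the role and index names below are hypothetical, not from your environment), such a dedicated role in authorize.conf could look something like this:

[role_sensitive_access]
# hypothetical dedicated role: the only role whose search scope includes the sensitive index
srchIndexesAllowed = idx_sensitive
srchIndexesDefault = idx_sensitive
importRoles = user

The key point is that idx_sensitive must not appear, directly or via a wildcard, in the srchIndexesAllowed of any other role, otherwise users without this role could still reach the data.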
Hello @ITWhisperer , can you give me one example of how to convert that?
That's exactly what I was after! Thanks, mate.
You need to clarify the question. Examples:

"This query seems to provide the correct results but it does not show the null values as above (n.b. the return 1000 isn't optimal code, I'd prefer it to simply return all of the results)"

What do you mean by "null values" here? Do you mean the 0s that the fillnull command supplied? You can apply the same trick as before, and it will have the same effect. (BTW, do not do fillnull outside of the append subsearch. That adds to compute.)

dataFeedTypeId=AS [| inputlookup approvedsenders | fields Value | return 9999 $Value]
| stats count as cnt_sender by sender
| append [| inputlookup approvedsenders | fields Value | rename Value AS sender]
| fillnull cnt_sender
| stats sum(cnt_sender) as cnt_sender by sender

(I changed 1000 to 9999 just as a suggestion; simply give it a large enough number.) Is this what you are looking for?
Hi @gcusello However, the authentication is based on the connection order. So if this sensitive user is authenticated via the first group in the order, he/she will only get the role to see the index mapped to the first group. I asked Splunk support officially about this and they do not have any idea/solution for it. Regards, Anton
Hi, My company also faced the same issue. Splunk has a very restricted way of doing RBAC. Until now they do not have any fixes for this. Have you had any luck on this thus far? Regards, Anton Yeo
So, after asking Splunk Support, they said:

1. The possible reasons and conditions under which the fishbucket could exceed the configured threshold of 500MB: it is because of the amount of data ingestion you are doing per day, and the fishbucket can be up to 2 or 3 times larger than the configured limit. This happens because of its backup mechanism with the file save and snapshot.tmp.

2. Whether there are any log files or diagnostic tools within Splunk that can help us track and understand the growth of the fishbucket index: if you have the nmon app installed, we found that it was contributing to the fishbucket's rapid growth.

3. The absolute maximum size that the fishbucket can reach within the Splunk system: there is no strict maximum size for the Splunk fishbucket. Its size is influenced by factors like the volume of data being ingested, the frequency of indexing, and the specific configuration of your Splunk environment.

4. Any factors that could contribute to the fishbucket exceeding the expected maximum by such a substantial margin: it can only grow with time, the volume of the data, and the frequency of indexing.
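If you want to track the growth yourself, one possible sketch (assuming the internal _thefishbucket index is searchable in your environment) is to check its on-disk size with dbinspect:

| dbinspect index=_thefishbucket
``` add up the on-disk size of the fishbucket's buckets, in MB ```
| stats sum(sizeOnDiskMB) AS fishbucket_size_mb

Running this on a schedule would let you chart the trend over time instead of discovering the overrun after the fact.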
Hello Everyone, I have got the cluster agent & operator installed successfully and am trying to auto-instrument the Java agent. The Cluster Agent is able to pick up the rules and identify the pod for instrumentation. When it starts creating the replica set, the new pod crashes with the below error:

➜ kubectl logs --previous --tail 100 -p pega-web-d8487455c-5btrc
[ASCII-art "Pega Docker" banner] v3.1.790
..
..
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
Picked up JAVA_TOOL_OPTIONS: -Dappdynamics.agent.accountAccessKey=640299c1-a74f-47fc-96df-63e1e7188146 -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.socket.collection.bci.enable=true -javaagent:/opt/appdynamics-java/javaagent.jar
Error opening zip file or JAR manifest missing : /opt/appdynamics-java/javaagent.jar
Error occurred during initialization of VM
agent library failed to init: instrument

Cluster Agent logs:

[DEBUG]: 2023-11-23 00:48:09 - logpublisher.go:135 - Finished with all the requests for publishing logs
[DEBUG]: 2023-11-23 00:48:09 - logpublisher.go:85 - Time taken for publishing logs: 187.696677ms
[ERROR]: 2023-11-23 00:48:09 - executor.go:73 - Command basename `find /opt/appdynamics-java/ -maxdepth 1 -type d -name '*ver*'` returned an error when exec on pod pega-web-d8487455c-5btrc. unable to upgrade connection: container not found ("pega-pega-web-tomcat")
[WARNING]: 2023-11-23 00:48:09 - executor.go:78 - Issues getting exit code of command 'basename `find /opt/appdynamics-java/ -maxdepth 1 -type d -name '*ver*'`' in container pega-pega-web-tomcat in pod ccre-sandbox/pega-web-d8487455c-5btrc
[ERROR]: 2023-11-23 00:48:09 - javaappmetadatahelper.go:45 - Failed to get version folder name in container: pega-pega-web-tomcat, pod: ccre-sandbox/pega-web-d8487455c-5btrc, verFolderName: '', err: failed to find exit code
[WARNING]: 2023-11-23 00:48:09 - podhandler.go:149 - Unable to find node name in pod ccre-sandbox/pega-web-d8487455c-5btrc, container pega-pega-web-tomcat
[DEBUG]: 2023-11-23 00:48:09 - podhandler.go:75 - Pod ccre-sandbox/pega-web-d8487455c-5btrc is in Pending state with annotations to be u

kubectl exec throws the same error:

➜ kubectl -n sandbox get pod
NAME                                         READY   STATUS     RESTARTS        AGE
pega-backgroundprocessing-59f58bb79d-xm2bt   1/1     Running    3 (3h34m ago)   9h
pega-batch-8f565b465-khvgd                   0/1     Init:0/1   0               2s
pega-batch-9ff8965fd-k4jgb                   1/1     Running    3 (3h34m ago)   9h
pega-web-55c7459cb9-xrhl9                    0/1     Init:0/1   0               1s
pega-web-57497b54d6-q28sr                    1/1     Running    16 (3h33m ago)  9h

➜ kubectl -n sandbox get pod
NAME                                         READY   STATUS              RESTARTS        AGE
pega-backgroundprocessing-59f58bb79d-xm2bt   1/1     Running             3 (3h34m ago)   9h
pega-batch-5d6c474c85-rkmxx                  0/1     ContainerCreating   0               2s
pega-batch-8f565b465-khvgd                   0/1     Terminating         0               7s
pega-batch-9ff8965fd-k4jgb                   1/1     Running             3 (3h34m ago)   9h
pega-web-55c7459cb9-xrhl9                    0/1     Terminating         0               6s
pega-web-57497b54d6-q28sr                    1/1     Running             16 (3h33m ago)  9h
pega-web-d8487455c-5btrc                     0/1     Pending             0               1s

➜ kubectl exec -it pega-batch-5d6c474c85-rkmxx -n sandbox bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("pega-pega-batch-tomcat")

Also, the workload pod is running with a custom user and not as the root user; not sure if that's an issue for permission to copy the Java agent binary? I assume it should not be. I would appreciate it if you could point me in the right direction to troubleshoot this issue. PS: We are able to successfully use the init-container method to instrument the java-agent and it works fine. Document reference: https://docs.appdynamics.com/appd/23.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/auto-instrument-applications-with-the-cluster-agent
So I had a JavaScript file running on my dashboard and it was working fine. Recently I uploaded an updated version of that JavaScript file. It shows my updated version on some computers, or for some users maybe? But some people are still seeing the old JavaScript file being served. Any idea how to make sure that all users get the new JavaScript file served to them when they load the dashboard? I have tried clearing the cache and restarting the computer multiple times. I have also tried /debug/refresh and _bump, which obviously didn't solve my issue.
It worked. Thanks a lot!
Please elaborate on "it doesn't work".  Which command in the provided list is failing?  What makes you think it's not working?  What error message(s) do you see?  What documentation are you following?  Have you tried installing Splunk directly on your Mac without a VM?
Try this query to list all of the source files sent by a given host.

| tstats count where index=xxx host=servername by source
| fields - count
Hi, thank you. I had it wrong actually, my apologies. What I need is to identify the log paths that are actively used on the logbinder servers. How do I locate these paths using Search and Reporting? This is my query so far:

index=xxx servername source="xlmwindevenlog:security"

Thanks again!
Hello, thanks! It looks good, but I still have a few issues. I configured this:

<style>
  #trellis_pie div.facets-container div.viz-panel:nth-child(1) g.highcharts-series path { fill: blue !important; }
  #trellis_pie div.facets-container div.viz-panel:nth-child(2) g.highcharts-series path { fill: yellow !important; }
  #trellis_pie div.facets-container div.viz-panel:nth-child(3) g.highcharts-series path { fill: red !important; }
  #trellis_pie div.facets-container div.viz-panel:nth-child(4) g.highcharts-series path { fill: green !important; }
  #trellis_pie div.facets-container div.viz-panel:nth-child(5) g.highcharts-series path { fill: gray !important; }
</style>

If I understand correctly, the order of the colors is the order of the conditions in the "case". In that case, "High Exposure" is supposed to be red but it's actually blue, "Low Exposure" is supposed to be blue but it's yellow, and "Medium Exposure" is supposed to be yellow but it's red; the other two are not shown at all, but they are supposed to be. Also, I don't see the number of results in the pie; I just see "other", even though Minimum Size is set to 0.