index="aws_np" [| makeresults
| eval earliest=strptime("12/03/2025 13:00","%d/%m/%Y %H:%M")
| eval latest=relative_time(earliest,"+1d")
| table earliest latest]
| rex field=_raw "messageGUID\": String\(\"(?<messageGUID>[^\"]+)"
| rex field=_raw "source\": String\(\"(?<source>[^\"]+)"
| rex field=_raw "type\": String\(\"(?<type>[^\"]+)"
| rex field=_raw "addBy\": String\(\"(?<addBy>[^\"]+)"
| where type="Contact"
| stats count by source

I tried it exactly the same way, but got: Error in 'search' command: Unable to parse the search: 'AND' operator is missing a clause on the left hand side.
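One variant worth trying, assuming the parse error comes from how the subsearch results are substituted back into the outer search: emit the time bounds with `return` instead of `table`, which quotes the values explicitly as `earliest="..." latest="..."` terms. This is a sketch, not a guaranteed fix:

```
index="aws_np"
    [| makeresults
     | eval earliest=strptime("12/03/2025 13:00","%d/%m/%Y %H:%M")
     | eval latest=relative_time(earliest,"+1d")
     | return earliest latest ]
| rex field=_raw "messageGUID\": String\(\"(?<messageGUID>[^\"]+)"
| where type="Contact"
| stats count by source
```

If this still fails, running the subsearch on its own and inspecting its output (Job Inspector shows the expanded search string) usually reveals what the outer search actually received.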
Thank you for the quick responses.

@PickleRick Your answer makes it clear I should go with a deployment server. I'm still a bit confused: if the default is disabled = false, shouldn't it already be enabled? ./system/local/serverclass.conf exists but it is empty.

@isoutamo Our instance will be long-lived.

What I did:
- changed system/local/serverclass.conf to contain [global] disabled = false
- copied splunk-add-on-for-unix-and-linux_1000.tgz to /opt/splunk/etc/deployment-apps
- added port 8089 to docker-compose.yml
- docker compose down / up
- opened port 8089 on our docker host firewall

On the Linux host I want to monitor, I removed the pre-existing local config and executed:
/opt/splunkforwarder/bin/splunk set deploy-poll dockerhost:8089
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk start

Initially I saw no difference, but now /splunk/en-GB/manager/launcher/agent_management?tab=forwarders shows my client. Thank you.
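For anyone following along: a bare [global] disabled = false is not enough on its own, because clients only receive apps that a server class maps to them. A minimal serverclass.conf sketch might look like the following. The server-class name and whitelist are placeholders, and the app stanza name must match the directory name the .tgz unpacks to under deployment-apps (check what it actually is on your system):

```
# $SPLUNK_HOME/etc/system/local/serverclass.conf
[global]
disabled = false

# "linux_hosts" is an arbitrary example name
[serverClass:linux_hosts]
whitelist.0 = *

# stanza name must match the app folder under etc/deployment-apps
[serverClass:linux_hosts:app:Splunk_TA_nix]
restartSplunkd = true
stateOnClient = enabled
```

After editing, a deployment-server reload (e.g. splunk reload deploy-server) is typically needed before clients pick up changes.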
But when I run this, I get: Error in 'search' command: Unable to parse the search: 'AND' operator is missing a clause on the left hand side.
Hi @PickleRick You are quite right - this will teach me not to try to do too many things at once, as I was also doing some INGEST_EVAL work at the same time. I've removed the completely incorrect start to the paragraph about DMs and will update the sentence around "can be accelerated" to include details about what this achieves. Thanks again for catching those points!
Hello @ITWhisperer, First of all, thanks for spending time on this. I tried running a simple query (not a subsearch) as follows, but it still does not run. Basically, I am trying to understand how we can calculate a parameter value while the query runs.

index="aws_np" earliest="12/03/2025:13:00" latest=[| makeresults
| eval earliest=strptime("12/03/2025 13:00","%d/%m/%Y %H:%M")
| eval latest=relative_time(earliest,"+1d")
| table latest]
| rex field=_raw "messageGUID\": String\(\"(?<messageGUID>[^\"]+)"
| rex field=_raw "source\": String\(\"(?<source>[^\"]+)"
| where type="Contact"
| stats count by source

This gives: Error in 'search' command: Unable to parse the search: Comparator '=' has an invalid term on the right hand side: (latest = "1741885200.000000"). How can I solve this?
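The error suggests the subsearch expands to a field=value pair rather than a bare value for the latest= term. One idiom that may help here, sketched under that assumption, is `return $fieldname`, which makes the subsearch emit only the value itself:

```
index="aws_np" earliest="12/03/2025:13:00"
    latest=[| makeresults
        | eval latest=relative_time(strptime("12/03/2025 13:00","%d/%m/%Y %H:%M"),"+1d")
        | return $latest ]
| rex field=_raw "messageGUID\": String\(\"(?<messageGUID>[^\"]+)"
| where type="Contact"
| stats count by source
```

With `return $latest` the subsearch result is substituted as a plain epoch number, which is a valid value for the latest time term.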
Hi @ViewCelia

Dashboard panels with "real-time" time ranges can sometimes stall due to browser resource limits or Splunk server load. Switching from real-time to a short rolling window (e.g., "Last 15 minutes") with a scheduled refresh often improves reliability.

Set the panel search to "Last 15 minutes" (relative time, not real-time), then configure the dashboard or panel to auto-refresh every 1 minute. This approach fetches recent data on each refresh instead of relying on continuous real-time streaming. Real-time searches consume more resources and can be less stable in dashboards. Use scheduled refreshes with relative time for better performance.

Check out the following docs page about real-time searches: https://docs.splunk.com/Documentation/Splunk/9.4.1/Search/Realtimeperformanceandlimitations#:~:text=or%20different%20users.-,Concurrent%20real%2Dtime%20searches,-Running%20multiple%20real

And also an interesting Splunk Answers post about the use of real-time: https://community.splunk.com/t5/Random/Why-are-realtime-searches-disliked-in-the-Splunk-world/m-p/449682

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
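In a classic SimpleXML dashboard, the relative window plus auto-refresh described above can be expressed directly on the panel's search element. This is a sketch; the index, sourcetype, and source values are placeholders for your actual login data:

```
<search>
  <query>index=your_index sourcetype=your_logins source="your_source" | timechart count</query>
  <earliest>-15m</earliest>
  <latest>now</latest>
  <refresh>1m</refresh>
  <refreshType>delay</refreshType>
</search>
```

`refresh` sets the interval and `refreshType` of `delay` restarts the countdown after each search completes, so refreshes do not pile up on a slow search.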
Thanks Giuseppe, so my search is as follows: index=sample1 sourcetype=x host=host1 (action=200 OR action=400)
| stats values(caller) as caller by callid
| stats count as all_calls by caller
| rename caller as caller_party
| eval caller_party=substr(caller_party, 2)
| appendcols
[ search index=sample1 AND sourcetype=y
| stats count as messagebank_calls by caller_party]
| search all_calls=*

Note how the base search has a few conditions on it, so in the final result I only want the callers that satisfy those conditions and have a matching record in sourcetype=y.
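One thing to watch with the search above: appendcols pairs rows by position, not by caller_party, so the messagebank_calls column can end up next to the wrong caller unless both result sets happen to be sorted identically. Under that assumption, a join on caller_party may be closer to the intent; this is a sketch reusing the same searches, and join's subsearch row limits apply:

```
index=sample1 sourcetype=x host=host1 (action=200 OR action=400)
| stats values(caller) as caller by callid
| stats count as all_calls by caller
| rename caller as caller_party
| eval caller_party=substr(caller_party, 2)
| join type=inner caller_party
    [ search index=sample1 sourcetype=y
    | stats count as messagebank_calls by caller_party ]
```

type=inner keeps only callers that appear in both sourcetypes, which matches the "has a matching record in sourcetype=y" requirement without the final all_calls=* filter.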
AppDynamics Native Agent Not Loading with Nexe-bundled NodeJS Application

Environment
- Node.js version: 20.9.0
- AppDynamics Node.js Agent version: Latest
- Nexe version: Latest
- OS: Windows Server
- Build tool: Nexe with custom build script

Project Structure
We have a Node.js API service that uses:
- Express.js for REST API
- Native modules (specifically the AppDynamics agent)
- Various npm packages for business logic
- Built and distributed as a single executable using Nexe

Issue
When running our Nexe-bundled application, the AppDynamics agent fails to initialize with the following error:

Appdynamics agent cannot be initialized due to Error: Missing required module. \\?\C:\Path\nodejs_api\node_modules\appdynamics-libagent-napi\appd_libagent.node TypeError: Cannot read properties of undefined (reading 'init') at LibagentConnector.init

What We've Tried
1. Including the native module as a resource in the Nexe build: --resource "./node_modules/appdynamics-libagent-napi/appd_libagent.node"
2. Copying the native module to the correct directory structure in the distribution:
dist/
- api_node.exe
- node_modules/
-- appdynamics-libagent-napi/
--- appd_libagent.node

According to Nexe's documentation, when dealing with native modules (.node files), the module should be placed in the `node_modules` directory relative to the compiled executable's location. This means that while the application is bundled into a single executable, native modules are expected to be loaded from the filesystem at runtime.

Question
How can we properly bundle and load the AppDynamics native agent with a Nexe-compiled Node.js application? Is there a specific configuration or approach needed for native modules, particularly AppDynamics, to work with Nexe? Any guidance or working examples would be greatly appreciated.
Hey everyone, I’m working on a Splunk dashboard where one of the panels is supposed to show real-time logins from a specific source. The search runs fine when I do it manually in the search bar, but when it’s in the dashboard panel, it doesn’t seem to update properly unless I refresh the whole page. The time picker is set to "Last 15 minutes (real-time)" and the auto-refresh is on, but the panel still gets stuck with old data sometimes. Has anyone run into something like this before? Could it be a refresh interval issue or something in the panel settings I’m missing? Thanks in advance for any tips!
CSS can be added to the source of a simpleXML / classic dashboard (not Dashboard Studio). This is a very old post - you should start your own question with more specific information about your particular use case so we can give you more targeted and relevant information.
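One common pattern for adding CSS to a single SimpleXML dashboard without touching backend files is a hidden HTML panel containing a style block. This is a sketch; the token name is arbitrary (any token that is never set keeps the row hidden), and the CSS selectors will vary by Splunk version, so inspect the DOM for your target elements:

```
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* example only: recolor panel titles in this dashboard */
        .panel-title { color: #0070c0; }
      </style>
    </html>
  </panel>
</row>
```

Because the row lives in the dashboard's own XML, the CSS applies only to that dashboard and is editable from the UI's Edit Source view, with no server-side appserver/static files required.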
Hi, thanks, it looks like this will work. But how will this apply specifically to this dashboard, and is there any easy way to use CSS from the UI itself without making changes in the backend?
Subsearches are executed before the main search, so when the appended search is executed the field is not available. Without the full search, I cannot determine where the error might be coming from. The basic concept of using makeresults to provide new values for earliest and latest can be demonstrated to work with the following complete search:

| makeresults
| eval line="First"
| append
[search index=_internal
[| makeresults
| eval earliest=strptime("12/03/2025 13:00","%d/%m/%Y %H:%M")
| eval latest=relative_time(earliest,"+1d")
| table earliest latest] sourcetype=splunkd
| head 1
| eval line="second"]
Also I tried it like [ search index="aws_np" [| makeresults
| eval earliest=strptime("12/03/2025","%d/%m/%Y")
| eval latest=relative_time(earliest,"+1d")
| table earliest latest] host="test" app_environment=qa
| rex field=_raw "messageGUID\": String\(\"(?<messageGUID>[^\"]+)"

but I am getting the error below: Error in 'search' command: Unable to parse the search: 'AND' operator is missing a clause on the left hand side.
Hello @ITWhisperer | eval latest_time=strptime("03/12/2025:12:30:00", "%m/%d/%Y:%H:%M:%S")
| eval new_latest_time=(latest_time + 18000) ``` 18000 seconds = 5 hrs ```
| eval new_latest_time_str=strftime(new_latest_time, "%m/%d/%Y:%H:%M:%S")
| append
```query for event brigdel```
[ search index="aws_np" earliest="03/12/2025:12:30:00" latest=$new_latest_time_str$ host="EventConsumer-mdm

I tried writing it as above (the query is not complete). I just wanted to share that I tried evaluating the values first and then using them within the subsearch, but it gives the following error: Invalid value "$new_latest_time_str$" for time term 'latest'
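The $token$ syntax is resolved by dashboards, not by ad-hoc SPL, which is why $new_latest_time_str$ reaches the parser as a literal string. One way to substitute computed field values into a follow-on search in plain SPL is the map command, which replaces $field$ placeholders with values from each input row. A sketch under that assumption (host=your_host is a placeholder, since the real host value was truncated above; epoch values are valid earliest/latest terms):

```
| makeresults
| eval earliest_t=strptime("03/12/2025:12:30:00", "%m/%d/%Y:%H:%M:%S")
| eval latest_t=earliest_t + 18000  ``` 18000 seconds = 5 hrs ```
| map maxsearches=1 search="search index=aws_np earliest=$earliest_t$ latest=$latest_t$ host=your_host"
```

Note that map runs one search per input row and has its own limits, so it suits this single-row "compute then search" pattern rather than large result sets.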
One more comment. If you are already indexing that amount of data with so few indexers, I'm really surprised that you have an ingestion-based license! Especially when your normal volume is "small" but a DDoS can occasionally double it, I propose that you ask for a CPU-based licensing model (SVC in cloud; it goes by another name on-prem). Anyhow, as others said, you must rearchitect your environment and add nodes and disk capacity based on your average daily usage, the retention time you need, and the queries you need to run. For that, you need someone local to discuss your scenarios and needs with.