Thank you @yuanliu. Had to modify it a little to make it work:
| rename log.message as _raw
| rex mode=sed "s/errors=(.+) fields=(.+)/errors=\"\1\" fields=\"\2\"/"
| rex field=_raw "path=(?P<path>.*) feedType=(?P<feedType>.*) sku=(?P<sku>.*) status=(?P<status>.*) errorCount=(?P<errorCount>.*) errors=(?P<errors>.*) fields=(?P<fields>.*)"
| table path, feedType, sku, status, errorCount, errors, fields
1. I need to dedup servers in first query. Count only Unique Servers.
2. I need to consider all the Servers with duplicates in the second query and then CompletedStatus then count the completed.
I.
index=abc source="/opt/src/datasource.tmp" | dedup _raw | dedup Servers | table Servers | stats count(Servers) as Total
II.
index=abc source="/opt/src/datasource.tmp" | dedup _raw | table Servers, CompletedStatus | stats count(CompletedStatus) as Completed
So 2 queries are compulsory.
But since transaction is one of the cursed commands you can, assuming Job is unique, do
| stats min(_time) as start max(_time) as end by Job
| eval duration=end-start
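A sketch of how that could look end to end. The index/sourcetype filter is a placeholder, and treating the earliest and latest event per job as the start/stop boundaries is an assumption, since the original search isn't shown:

```spl
index=your_index sourcetype=your_sourcetype
| stats min(_time) as start max(_time) as end by Job
| eval duration=end-start
| fieldformat duration=tostring(duration, "duration")
```

fieldformat keeps duration numeric under the hood while rendering it as HH:MM:SS in the results table.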
1. Each of your searches is unnecessarily complicated (and thus less efficient than it could be). You can cut the whole dedup and table part and get the same results.
2. Are you sure that count(Servers) and count(CompletedServers) respectively is what you need? It might be (I don't know your use case) but typically it's either the general count of results or the distinct count of a field. Counts of fields are rarely the proper solution. But there are uses for it. Yours might be one of them, so just asking if you know what you're doing.
3. You don't have to join anything. Assuming the counts are OK, you simply need one search to calculate both stats:
index=abc source="/opt/src/datasource.tmp"
| stats count(CompletedServers) as Completed count(Servers) as Total
And that's it.
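Building on that single search, the percentage asked about in the question can be added with one more eval. This is a sketch; the exact display format (Completed count with the percentage in brackets) is an assumption based on the description, since the screenshot isn't visible here:

```spl
index=abc source="/opt/src/datasource.tmp"
| stats count(CompletedServers) as Completed count(Servers) as Total
| eval percentage=round(Completed/Total*100, 2)
| eval display=Completed." (".percentage."%)"
| table display
```

The dot operator concatenates strings in eval, so `display` ends up looking like `42 (84.0%)`.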
Hi Sir, I have two queries like below. I need to join both queries, then divide Completed by Total to calculate a percentage, and display the percentage in brackets along with the Completed count, as shown in the above screenshot in the panel.
1. index=abc source="/opt/src/datasource.tmp" | dedup _raw | table Servers | stats count(Servers) as Total
2. index=abc source="/opt/src/datasource.tmp" | dedup _raw | table CompletedServers | stats count(CompletedServers) as Completed
Assuming the timestamp you want is in _time, you could use transaction to get the duration
| transaction Job
If not, you could reassign the _time field to be the time you want.
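A sketch of that reassignment, assuming the desired time is stored in an epoch-valued field called Timestamp (an assumption based on the field names in the question):

```spl
| eval _time=Timestamp
| transaction Job
| table Job duration
```

transaction emits a duration field, in seconds, spanning the first and last event of each transaction.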
My query returns these events. I need to compute the total time A was in this state and the total time B was in this state. My thought is to subtract the Timestamp of the first A from the most recent A, and so on for B, but I can't figure out the right way to do this.

Timestamp          Job  Date       LoggedTime  Ready
1728092168.000000  A    10/4/2024  21:36:03    1
1728092163.000000  A    10/4/2024  21:35:50    1
1728092150.000000  A    10/4/2024  21:35:27    1
1728092127.000000  A    10/4/2024  21:35:16    1
1728090335.000000  B    10/4/2024  21:05:15    2
1728090315.000000  B    10/4/2024  21:05:03    2
1728090303.000000  B    10/4/2024  21:04:53    2
1728090293.000000  B    10/4/2024  21:04:31    2
If the same search on the same data, run within the same app (are you running both searches from the same app?), yields different results for two different users, there must be some difference in configuration. It can be due either to one of the users having custom settings defined at the per-user level, or to a difference in permissions to the app in which the settings (probably either extractions or calculated fields) are defined. Compare the settings for the relevant sourcetype with app and user context using btool.
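For example, the comparison could look something like this (the sourcetype, app, and user names are placeholders):

```shell
$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug --app=your_app --user=userA
$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug --app=your_app --user=userB
```

The --debug flag prints which .conf file each setting comes from, so diffing the two outputs points straight at the layer where the users' configurations diverge.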
Did you create any custom field extraction? If so, check if the field extraction's permissions are set to "global." It might currently be private to you, which could explain why only you're getting the correct results.
Hi @whitecat001 ... this looks like a mistaken eval field assignment or a table printing issue. Please share your search query (remove any sensitive details) and/or the other user's search query; then troubleshooting this will become easy. Thanks.
It may help to think of a subsearch like a macro. Just as the contents of a macro replace the macro name in a query, so, too, do the results of a subsearch replace the subsearch text in the query. Therefore, it's important that the results of the subsearch make sense, semantically. In the example query, once the subsearch completes, Splunk tries to execute this
index=abc status=error
| stats count AS FailCount
(( TotalPlanned=761 ))
| eval percentageFailed=(FailCount/TotalPlanned)*100
which is not a valid query. One fix is to use the appendcols command with the subsearch
index=abc status=error
| stats count AS FailCount
| appendcols [ search index=abc status=planning
| stats count AS TotalPlanned
| table TotalPlanned ]
| eval percentageFailed=(FailCount/TotalPlanned)*100
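As a side note, the same numbers can usually be had without any subsearch at all, using conditional counts in a single stats call. A sketch, assuming both event types live in index=abc as in the original query:

```spl
index=abc (status=error OR status=planning)
| stats count(eval(status="error")) as FailCount count(eval(status="planning")) as TotalPlanned
| eval percentageFailed=(FailCount/TotalPlanned)*100
```

One pass over the data instead of two, and no subsearch result limits to worry about.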
While ingesting files from network shares is possible (though it has performance drawbacks, especially in high-volume scenarios), it requires the ingesting component (either a HF or UF) to run as a domain user which has access to the source share. Maybe, just maybe, it could work with a completely public share (I haven't tested it myself), but it's not a very good idea in the first place.
To increase the 10 MB limit, you'll need to change the MAXIMUM_EDITABLE_SIZE value in the settings.py file found in this directory: /opt/splunk/etc/apps/lookup_editor/bin/lookup_editor
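For illustration only — the exact variable format and units should be confirmed in the file itself; treating the value as a byte count is an assumption here:

```
# /opt/splunk/etc/apps/lookup_editor/bin/lookup_editor/settings.py
MAXIMUM_EDITABLE_SIZE = 50 * 1024 * 1024  # assumed bytes: raises the limit from 10 MB to 50 MB
```

Keep in mind that edits inside an app's bin directory may be overwritten when the app is upgraded.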
Trying to monitor a print-log folder on a separate print server, outside the host where Splunk runs, via a UNC path. The folder only contains .log files. I have the following index created:
index = printlogs
When I try to add the folder path in Splunk through the add data feature: "add data" - "Monitor" -"Files & Directories" I get to submit and then get an error:
"Parameter name: Path must be absolute".
So I added the following stanza to my inputs.conf file in the systems/local/folder:
[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs\*.log]
index = printlogs
host = cpn-prt01
disabled = 0
renderXml = 1
I created a second stanza with index = printlogs2 (and the respective index created) to monitor the following path, to see if I can pull straight from the path and ignore the file type inside:
[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs\]
I do see the full path to both in the "Files & Directories" list under Data Inputs. However, I am not getting any event counts when I look at the respective indexes on the Splunk Indexes page. I did a Splunk refresh and even restarted the Splunk server, with no luck. Thought maybe someone has run into a similar issue or has a possible solution.
Thanks in advance.
Monitoring console doesn't "log" anything. It's a collection of dashboards processing data from Splunk's internal indexes and REST calls to your Splunk components (and it keeps a bit of state data in internal storage, like a list of forwarders). This is the part already covered by others. But the other important point in this topic is that it's rarely a good idea to use a tool to monitor itself. That's why you have external monitoring solutions, and why you'd generally rather want an external tool periodically checking, for example, web interface availability or the server's performance metrics. If you want to get something from Splunk's internal logs... well, you can find _something_, but that won't actually tell you whether the service was available, healthy and performing well enough.