All Posts

While ingesting files from network shares is possible (though it has performance drawbacks, especially in high-volume scenarios), it requires the ingesting component (either a HF or a UF) to run as a domain user which has access to the source share. Maybe, just maybe, it could work with a completely public share (I haven't tested it myself), but it's not a good idea in the first place.
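For illustration, a monitor stanza over a UNC path might look like the sketch below (the share path and index are made up); the Splunk service on the ingesting Windows host would have to run as a domain account with read access to that share:

[monitor://\\fileserver\logs$\app\*.log]
index = main
disabled = 0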
To increase the 10 MB limit, you'll need to change the MAXIMUM_EDITABLE_SIZE value in the settings.py file found in this directory:  /opt/splunk/etc/apps/lookup_editor/bin/lookup_editor  
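Assuming the value is expressed in bytes (worth verifying against your version of the app before editing), the change in settings.py would look roughly like this:

MAXIMUM_EDITABLE_SIZE = 52428800  # 50 MB; the 10 MB default would be 10485760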
Trying to monitor a separate print server folder, outside where Splunk is hosted, that holds print logs and is reached via a UNC path. The folder only has .log files in it. I have the following index created: index = printlogs

When I try to add the folder path in Splunk through the add data feature ("Add Data" - "Monitor" - "Files & Directories"), I get to submit and then get an error: "Parameter name: Path must be absolute". So I added the following stanza to my inputs.conf file in the system/local/ folder:

[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs\*.log]
index = printlogs
host = cpn-prt01
disabled = 0
renderXml = 1

I created a second stanza with index = printlogs2 to monitor the following path, to see if I can pull straight from the directory and ignore the file type inside:

[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs\]

I do see the full path to both in the "Files & Directories" list under Data Inputs. However, I am not getting any event counts when I look at the respective indexes on the Splunk Indexes page. I did a Splunk refresh and even restarted the Splunk server with no luck. Thought maybe someone has run into a similar issue or has a possible solution. Thanks in advance.
The Monitoring Console doesn't "log" anything. It's a collection of dashboards processing data from Splunk's internal indexes and REST calls to your Splunk components (and it keeps a bit of state data in internal storage, like a list of forwarders) - this is the part already covered by others. But the other important point in this topic is that it's rarely a good idea to use a tool to monitor itself. That's why you have external monitoring solutions, and generally you'd rather want an external tool periodically checking, for example, web interface availability or the server's performance metrics. If you want to get something from Splunk's internal logs... well, you can find _something_, but that won't actually tell you whether the service was available, healthy and performing well enough.
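That said, if you just want a quick look at what splunkd has been complaining about, a starting point could be a search along these lines (a rough sketch, not a substitute for real monitoring):

index=_internal sourcetype=splunkd log_level=ERROR
| timechart limit=10 count by component

This only shows error volume per component over time; it says nothing about availability or response times, which is exactly the gap an external monitor fills.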
Hello, I am running two separate queries to extract values.

First query:

index=abc status=error | stats count AS FailCount

Second query:

index=abc status=planning | stats count AS TotalPlanned

Both queries are working well and giving expected results. When I combine them using a subsearch, I am getting an error:

index=abc status=error
| stats count AS FailCount
  [ search index=abc status=planning | stats count AS TotalPlanned | table TotalPlanned ]
| eval percentageFailed=(FailCount/TotalPlanned)*100

Error message:

Error in 'stats' command: The argument '(( TotalPlanned=761 ))' is invalid

Note: The count 761 is a valid count for TotalPlanned, so it did perform that calculation.
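For what it's worth, a common way to get both counts in a single pass, avoiding the subsearch entirely (a sketch, not from the original thread), is:

index=abc (status=error OR status=planning)
| stats count(eval(status="error")) AS FailCount, count(eval(status="planning")) AS TotalPlanned
| eval percentageFailed=(FailCount/TotalPlanned)*100

The subsearch form fails because a subsearch in that position is expanded into a search filter, ( ( TotalPlanned=761 ) ), which is not a valid argument to stats.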
You are right, I had issues in my raw data. Thanks very much for the help!!
WINNER WINNER! Thank you very much!  
A user is receiving duplicated field names in Splunk results. For example, when I run a search I get an output of field1=Value1, but when the user runs the same search he gets an output of field1="field1=value1". Does anyone know what I need to do to help the user get the same result as mine?
Are you still using a 6.5 forwarder in 2024? I suppose not. Try starting a new thread with a more detailed description of your problem.
Well, the stronger the assumptions you can make about the data, the easier it gets. If you have to care about exceptions, it gets complicated quickly.

Take a typical example of key="string value" or key=number_value.

The latter form is obviously pretty easy to parse:

(?<key>\S+)=(?<value>\d+)

The former is way more complicated to do well. If you simply do

(?<key>\S+)="(?<value>.*)"

you'll capture way over the boundary of the k-v pair if you have many of them. If you go lazy,

(?<key>\S+)="(?<value>.*?)"

you'll stop matching in the middle if you have an escaped quote within your value. (The same happens if, instead of matching .* lazily, you match [^"]*, because of course you still stop at that same quote.)

OK. So how about we match everything that is either not a quote, or is a quote preceded by a backslash:

(?<key>\S+)="(?<value>([^"]|\\")*)"

Nice idea, but what if your value ends with a backslash (expressed as a double backslash due to escaping)? You'll miss the end of the value, because the data contains the \" sequence (the second backslash plus the closing quote), so the regex treats it as an escaped quote and runs way past the end. So maybe we should try a negative lookbehind, so that a backslash preceding a quote cannot itself be preceded by another backslash? But then what about three, four and so on backslashes followed by a quote? It gets messier and messier.

If you either know for sure that your data will never look like those border cases, or can consciously decide that you don't care about those cases and can live with the fact that they will be extracted wrongly, it's way easier. In your case that would be, for example, a message containing

comment=This is actually an example of a key=value pair. Another_field=Another value

or

groceries=Let's buy bread, strawberries, butter and pork loin.

As you can see, without additional assumptions about the data, the extractions will be wrong.
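A quick way to see the greedy over-capture on sample data (the field names and values here are made up):

| makeresults
| eval _raw="user=\"alice\" action=\"login\""
| rex "user=\"(?<user>.*)\""

The greedy .* runs to the last quote in the event, so user comes out as: alice" action="login. Switching to the lazy .*? fixes this sample but, as described above, breaks as soon as a value contains an escaped quote.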
Hi @Tiong.Koh

I executed the command in the CMD console on the server and the output was the attached file; the name didn't change after that. The command was the following:

java -Djava.library.path="E:\AppDynamics_DB" -Ddbagent.name="DBAgent01" -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -jar E:\AppDynamics_DB\db-agent.jar

I don't know where the problem is or if I did something wrong.
I was able to get it to work with this final query:

index="jsm_issues"
| dedup _time key
| timechart count by fields.status.name
| foreach * [eval <<FIELD>> = if('<<FIELD>>' > 0, '<<FIELD>>', null())]
| filldown *

Thank you so much for your answer.
Hello. Did you manage to resolve this? I am facing a similar problem with my intermediate forwarders crashing. I have enough memory (16 GB).
I have not seen a value with an equals sign. I am open to suggestions to pick a better delimiter. There are some values with spaces. Would this be a problem? I can certainly improve the structure.

The comma-space-separated value is a multivalued field. There are two such fields, of which I would need to compute the highest-frequency value at a later stage.

I can modify the multivalued field [list of data] to something like this: https://www.splunk.com/en_us/blog/tips-and-tricks/delimiter-base-kv-extraction-advanced.html?locale=en_us

FIELDS = "time", "client-ip", "cs-method", "sc-status"

@PickleRick Let me know if there are any changes to make to log.message to make it "full-preg" so that Splunk can deliver.
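If the goal is the positional, delimiter-based extraction from that blog post, the configuration might look roughly like this (the stanza name and sourcetype binding are assumptions):

# transforms.conf
[log_message_fields]
DELIMS = ","
FIELDS = "time", "client-ip", "cs-method", "sc-status"

# props.conf
[your:sourcetype]
REPORT-log_message_fields = log_message_fields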
I tried the exact same thing, just with my URL. It did not pass any tokens. Permissions have been double- and triple-checked. Is there a limit to how many tokens can be passed? I am trying to pass the current dashboard a user may be on so they can share the link with all the tokens (filters) applied.
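For reference, the usual convention is to pass each input token with a form. prefix in the query string; the host, app, dashboard and token names below are made up:

https://splunk.example.com/en-US/app/search/my_dashboard?form.env=prod&form.time_tok.earliest=-24h%40h&form.time_tok.latest=now

If a token arrives in the URL but the dashboard ignores it, a mismatch between the URL parameter name and the input's token name is the usual suspect.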
If you are doing a clean installation and want to use the old node information, you must restore at least the splunk/etc directory. But then there could be some conflicts with buckets etc. For that reason I prefer to remove the node, clean up and create a new node which I add to the cluster.

If you are doing that in-place Centos2RHEL migration as @johnhuang told, then if you have the possibility to do it offline, that's probably the safest option. If you cannot do it offline, then maybe you could try to put the CM in maintenance mode, then update/migrate one node, sync the cluster, put it in maintenance mode again and continue with the next node, etc. This could work, but it's best if you can test it in a lab/test environment first. I don't take any responsibility for these instructions as I haven't done this myself! It's hard to give any estimate of how long it takes, as it depends on your hardware, disk speed, network etc.
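For the maintenance-mode part, the commands on the cluster manager would be along these lines (a sketch only; verify against your Splunk version and your migration runbook):

splunk enable maintenance-mode
# migrate one peer, bring it back up, wait for the cluster to stabilize
splunk disable maintenance-mode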
A lot of that query wasn't cleaned up from previous exploration queries, but thanks for the response. It looks like your suggestion is almost working for me, except that the statement errors on the columns that are multi-worded:

| foreach * [eval <<FIELD>> = if(<<FIELD>> > 0, <<FIELD>>, null())]
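In case it helps a future reader: wrapping the <<FIELD>> template in single quotes is the usual way to make eval treat multi-word column names as field references, matching the working query quoted elsewhere in this thread:

| foreach * [eval <<FIELD>> = if('<<FIELD>>' > 0, '<<FIELD>>', null())]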
Re: "when we combined these lookups as shown in the query you shared, the results only reflected matches from the second lookup, meaning only the IP addresses were being compared."

My mistake again. When using the same output name, the second lookup overrides the first. Use OUTPUTNEW in the second:

index=A sourcetype="Any"
| fields "IP address" Hostname OS
| dedup "IP address" Hostname OS
| eval Hostname = lower(Hostname)
| lookup inventory.csv Reporting_Host as Hostname output Reporting_Host as match
| lookup inventory.csv Reporting_Host as "IP address" OUTPUTNEW Reporting_Host as match
| eval match = if(isnull(match), "missing", "ok")
| table Hostname "IP address" OS match
Can you explain the physical significance of "last valid count"? If you fill those zero values with any given non-zero value, where do you set the boundary? The very first zero following a previous non-zero? Why is that a valid representation of your data?

If you mean to simply connect non-zero values with a line, just set those 0 to null:

index="jsm_issues"
| dedup _time key
| timechart count by fields.status.name
| foreach * [eval <<FIELD>> = if(<<FIELD>> > 0, <<FIELD>>, null())]

(Two pointers: when using timechart, there is no need to sort _time. Also, I don't see the point of count(fields.status.name) when the groupby is the field itself.)

Then, in Visualization -> Format, set Null values to "Connect".

Here is an emulation:

index=_internal sourcetype=splunkd thread_name=* earliest=-1h@h latest=-0h@h-30m
| timechart count by thread_name
| foreach * [eval <<FIELD>> = if(<<FIELD>> > 100, sqrt(<<FIELD>>), 0)]
``` the above emulates index="jsm_issues" | dedup _time key | timechart count by fields.status.name ```

[Chart: without setting 0 to null] [Chart: 0 set to null, without connecting the dots] [Chart: with the dots connected]
Hi @yuanliu

Thank you for your feedback! I will definitely look into the performance issue and plan for further improvements. Regarding the query, I tried it out, and here's how it's working:

Index A contains around 70k assets, which serves as our asset inventory. Some hosts in this index have multiple IP addresses assigned to them. Index B has just the hostname, but this can include a mix of IP addresses, FQDNs, and hostnames.

When I ran the query, the first lookup compared the Reporting_Host with the hostnames in Index A and determined whether there was a match. The second lookup compared the Reporting_Host against the IP addresses in Index A to check for matches. However, when we combined these lookups as shown in the query you shared, the results only reflected matches from the second lookup, meaning only the IP addresses were being compared.

Additionally, since a host in Index A has multiple IP addresses, the query gives a match for the IP address that corresponds, but for the remaining IP addresses associated with that host, it shows them as missing.