All Posts

No, I am using 9.1 and 9.3. However, I am experiencing the same problem on Red Hat 8. I will start a new thread as per your advice. 
Hi @whitecat001 ... this looks like a mistaken eval field assignment or a table-printing issue. Please share your search query (with any sensitive details removed) and/or the other user's search query; that will make troubleshooting much easier. Thanks.
It may help to think of a subsearch like a macro. Just as the contents of a macro replace the macro name in a query, so, too, do the results of a subsearch replace the subsearch text in the query. Therefore, it's important that the results of the subsearch make sense semantically. In the example query, once the subsearch completes, Splunk tries to execute this:

index=abc status=error
| stats count AS FailCount (( TotalPlanned=761 ))
| eval percentageFailed=(FailCount/TotalPlanned)*100

which is not a valid query. One fix is to use the appendcols command with the subsearch:

index=abc status=error
| stats count AS FailCount
| appendcols [ search index=abc status=planning | stats count AS TotalPlanned | table TotalPlanned ]
| eval percentageFailed=(FailCount/TotalPlanned)*100
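Another option, sketched here reusing the index and field names from the example above (and assuming status takes exactly those two values in the relevant events): compute both counts in a single stats pass over one base search, which avoids the subsearch entirely.

index=abc (status=error OR status=planning)
| stats count(eval(status="error")) AS FailCount count(eval(status="planning")) AS TotalPlanned
| eval percentageFailed=(FailCount/TotalPlanned)*100

This also tends to be cheaper, since subsearches have their own result and runtime limits.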
Out of curiosity - why foreach? Wouldn't mvmap() suffice? I find foreach... a bit unsplunky.
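For context, a minimal sketch of the difference (the field names here are hypothetical): foreach iterates over matching field names, while mvmap() iterates over the values of a single multivalue field, so they only overlap when the per-field and per-value operations coincide.

| foreach status_* [eval <<FIELD>> = '<<FIELD>>' * 2]

| eval doubled = mvmap(statuses, statuses * 2)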
While ingesting files from network shares is possible (though it has performance drawbacks, especially in high-volume scenarios), it requires the ingesting component (either a HF or UF) to run as a domain user which has access to the source share. Maybe, just maybe, it could work with a completely public share (I haven't tested it myself), but it's not a very good idea in the first place.
To increase the 10 MB limit, you'll need to change the MAXIMUM_EDITABLE_SIZE value in the settings.py file found in this directory:  /opt/splunk/etc/apps/lookup_editor/bin/lookup_editor  
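As a sketch only (the variable layout in settings.py can differ between app versions; the 20 MB value below is an arbitrary example, and the assumption that the size is expressed in bytes should be verified against your copy of the file):

# settings.py in the lookup_editor app - hypothetical excerpt
# raise the editable-size cap from 10 MB to 20 MB (value assumed to be in bytes)
MAXIMUM_EDITABLE_SIZE = 20 * 1024 * 1024

A restart of Splunk web is likely needed for the change to take effect.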
Trying to monitor a separate print server folder, outside where Splunk is hosted, with print logs that have a UNC path. The folder only has .log files in it. I have the following index created: index = printlogs

When I try to add the folder path in Splunk through the add data feature ("Add Data" - "Monitor" - "Files & Directories"), I get to submit and then get an error: "Parameter name: Path must be absolute". So I added the following stanza to my inputs.conf file in the system/local folder:

[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs\*.log]
index = printlogs
host = cpn-prt01
disabled = 0
renderXml = 1

I created a second stanza with index = printlogs2 (with its respective index) to monitor the following path, to see if I can pull straight from the path and ignore the file type inside:

[monitor://\\cpn-prt01\c$\Program Files\Printer\server\logs\print-logs\]

I do see the full path to both in the "Files & Directories" list under Data Inputs. However, I am not getting any event counts when I look at the respective indexes on the Splunk Indexes page. I did a Splunk refresh and even restarted the Splunk server with no luck. Thought maybe someone has run into a similar issue or has a possible solution. Thanks in advance.
Monitoring Console doesn't "log" anything. It's a collection of dashboards processing data from Splunk's internal indexes and REST calls to your Splunk components (and it keeps a bit of state data in internal storage - like a list of forwarders) - this is the part already covered by others. But the other important point in this topic is that using a tool to monitor itself is rarely a good idea. That's why you have external monitoring solutions, and generally you'd rather want an external tool periodically checking - for example - web interface availability or the server's performance metrics. If you want to get something from Splunk's internal logs... well, you can find _something_, but that won't actually tell you whether the service was available, healthy and performing well enough.
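To illustrate the external-check idea, a minimal sketch (host, credentials and the exact endpoint are assumptions to verify against your version's REST API reference): an external monitor can poll splunkd's management port for its health report instead of asking Splunk to report on itself.

# hypothetical external probe against the management port
curl -sk -u admin:changeme "https://splunk-host:8089/services/server/health/splunkd?output_mode=json"

A non-200 response or a timeout then tells you about availability in a way Splunk's own internal logs cannot.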
Hello, I am running two separate queries to extract values.

First query:

index=abc status=error
| stats count AS FailCount

Second query:

index=abc status=planning
| stats count AS TotalPlanned

Both queries are working well and giving the expected results. When I combine them using a subsearch, I am getting an error:

index=abc status=error
| stats count AS FailCount [ search index=abc status=planning | stats count AS TotalPlanned | table TotalPlanned ]
| eval percentageFailed=(FailCount/TotalPlanned)*100

Error message:

Error in 'stats' command: The argument '(( TotalPlanned=761 )) is invalid'

Note: The count 761 is a valid count for TotalPlanned, so it did perform that calculation.
You are right, I had issues in my raw data. Thanks very much for the help!!
WINNER WINNER! Thank you very much!  
A user is receiving duplicated field names in Splunk results. For example, when I run a search I get an output of field1=Value1, but when the user runs the same search he gets an output of field1="field1=value1". Does anyone know what I need to do to help the user get the same result as mine?
Are you still using a 6.5 forwarder in 2024? I suppose not. Try starting a new thread with a more detailed description of your problem.
Well, the stronger the assumptions you can make about the data, the easier it gets. If you have to care about exceptions, it gets complicated quickly. Take a typical example of

key="string value"

or

key=number_value

The latter form is obviously pretty easy to parse:

(?<key>\S+)=(?<value>\d+)

The former is way more complicated to do well. If you simply do

(?<key>\S+)="(?<value>.*)"

you'll capture way over the boundary of the k-v pair if you have many of them. If you go lazy,

(?<key>\S+)="(?<value>.*?)"

you'll stop matching in the middle if you have an escaped quote within your value. (The same happens if, instead of matching .* lazily, you match [^"]*, because of course you encounter the same quote.) OK, so how about we match for everything that is not a quote, or is a quote preceded by a backslash:

(?<key>\S+)="(?<value>([^"]|\\")*)"

Nice idea, but what if your value ends with a backslash (expressed as a double backslash due to escaping)? You'll miss it, because it contains the \" sequence, and go way past the end of the value. So maybe we should try a negative lookbehind, so that a backslash preceding a quote cannot itself be preceded by another backslash? But how about three, four and so on backslashes ended with a quote? It gets messier and messier. If you can either know for sure that your data will never look like those border cases, or can consciously decide that you don't care about those cases and can live with the fact that they will be extracted wrongly, it's way easier. In your case that would be, for example, a message containing

comment=This is actually an example of a key=value pair. Another_field=Another value

or

groceries=Let's buy bread, strawberries, butter and pork loin.

As you can see, without additional assumptions about the data, the extractions will be wrong.
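To make the failure mode concrete, a small runnable sketch (the sample string and field name are made up for illustration): the lazy pattern stops at the first quote character it sees, even when that quote is escaped inside the value.

| makeresults
| eval _raw="comment=\"He said \\\"hi\\\" to me\" level=\"high\""
| rex field=_raw "comment=\"(?<comment>.*?)\""
| table comment

Here _raw contains comment="He said \"hi\" to me", but the extraction returns only He said \ because the regex has no notion of escaping.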
Hi @Tiong.Koh

I executed the command in a CMD console on the server and the output was the attached file; the name didn't change after that. The command was the following:

java -Djava.library.path="E:\AppDynamics_DB" -Ddbagent.name="DBAgent01" -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -jar E:\AppDynamics_DB\db-agent.jar

I don't know where the problem is or if I did something wrong.
I was able to get it to work with this final query:

index="jsm_issues"
| dedup _time key
| timechart count by fields.status.name
| foreach * [eval <<FIELD>> = if('<<FIELD>>' > 0, '<<FIELD>>', null())]
| filldown *

Thank you so much for your answer.
Hello. Did you manage to resolve this? I am facing a similar problem with my intermediate forwarders crashing. I have enough memory (16 GB).
I have not seen a value with an equals sign. I am open to suggestions for picking a better delimiter. There are some values with spaces - would this be a problem? I can certainly improve the structure. The comma-space-separated list is a multivalued field. There are two such fields, for which I would need to compute the highest-frequency value at a later stage. I can modify the multivalued field [list of data] to something like this: https://www.splunk.com/en_us/blog/tips-and-tricks/delimiter-base-kv-extraction-advanced.html?locale=en_us

FIELDS= "time", "client-ip", "cs-method", "sc-status"

@PickleRick Let me know if there are any changes to make to log.message to make it "full-preg" so that Splunk can deliver.
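For reference, a minimal sketch of the delimiter-based extraction the linked post describes (the stanza name, sourcetype and delimiter here are assumptions about the data above, not tested config):

# transforms.conf - hypothetical stanza name; when FIELDS is used, DELIMS holds only the field separator
[message_delim_fields]
DELIMS = ","
FIELDS = "time", "client-ip", "cs-method", "sc-status"

# props.conf - wire the transform to your sourcetype (name assumed)
[my_sourcetype]
REPORT-message_fields = message_delim_fields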
I tried the exact same thing, just with my URL. It did not pass any tokens. Permissions have been double and triple checked. Is there a limit to how many tokens can be passed? I am trying to pass the current dashboard a user may be on, so they can share the link with all the tokens (filters) applied.
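For comparison, a sketch of the URL shape that normally prefills dashboard inputs (the app, dashboard and token names are hypothetical): each input token is passed as a form.<token> query parameter, and values must be URL-encoded.

https://splunk-host:8000/en-US/app/my_app/my_dashboard?form.env=prod&form.host=web%2A&form.time.earliest=-24h&form.time.latest=now

If even a hand-built URL like this doesn't prefill the inputs, the usual cause is that the token names in the URL don't match the dashboard's input token names; there is no small limit on the number of tokens, though very long URLs can hit browser or proxy length limits.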
If you are doing a clean installation and want to use the old node's information, you must restore at least the splunk/etc directory. But then there could be some conflicts with buckets etc. For that reason I prefer to remove the node, clean up, and create a new node to add to the cluster. If you are using that in-place Centos2RHEL migration as @johnhuang told, then doing it offline, if you have the possibility, is probably the safest option. If you cannot do it offline, then maybe you could try to put the CM in maintenance mode, update/migrate one node, sync the cluster, put it in maintenance mode again and continue with the next node, etc. This could work, but it's best if you can test it in a lab/test environment first. I don't take any responsibility for these instructions as I haven't done this myself! It's hard to give any estimate of how long it takes, as it depends on your hardware, disk speed, network etc.
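As a sketch of the maintenance-mode sequence (these are the standard indexer cluster CLI commands, but verify them against your version's docs; run each on the host indicated in the comments):

# on the cluster manager: enable maintenance mode before touching a peer
splunk enable maintenance-mode

# on the peer being migrated: take it offline gracefully
splunk offline

# ...migrate the OS on that peer, bring Splunk back up, wait for it to rejoin...

# on the cluster manager: verify, then leave maintenance mode so the cluster can sync
splunk show maintenance-mode
splunk disable maintenance-mode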