All Posts

@ITWhisperer Actually, the query below was not returning results, which is why the error field was not populated:

| xpath outfield=MIS_Address "//MIS_Address"

I have removed that query and replaced it with

| rex field=_raw "eqtext\:MIS_Address\>(?<error>.+)\<\/eqtext\:MIS_Address"

and the error field is now populating. But now I am seeing another issue: as per the screenshot below, only one count is visible for "error_description", even though the error count is 11 and the description count is 100+.
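A quick way to see what is driving the counts is to compare the distinct values of each input against the concatenated field (a minimal sketch; the index and the rex patterns are taken from the query later in this thread, and the stats line is only a diagnostic):

index=2313917_2797418_scada
| rex field=_raw "eqtext\:MIS_Address\>(?<error>.+)\<\/eqtext\:MIS_Address"
| rex field=_raw "eqtext\:Description\>(?<description>.+)\<\/eqtext\:Description"
| eval error_description=error . "-" . description
| stats dc(error) as distinct_errors dc(description) as distinct_descriptions dc(error_description) as distinct_pairs

If distinct_pairs comes out much lower than distinct_errors, the concatenation is probably running on events where one of the two inputs is still null.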
Hi all, I am currently testing the HTTP Event Collector (HEC) with a Splunk Cloud trial account. All I do is post data to the HEC URL, and it works perfectly for a local instance of an Enterprise account at http://127.0.0.1:8088/services/collector/event. One solution I saw on the community forum was to disable SSL validation; however, that isn't a good option for production, for security reasons. Another solution I saw was to upload certificates, but that option isn't suited to a SaaS solution with many different customers. Is it possible to solve this issue in a different way? I would also like to ask whether this problem would persist for normal production client accounts, and what a generic solution for it would be.

Curl request:

curl https://prd-p-xxxxx.splunkcloud.com:8088/services/collector/event -H "Authorization: Splunk token" -d '{"event": "hello world"}'

Curl response:

curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.

Thank you for your time and assistance in addressing these inquiries.
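One middle ground between disabling validation and installing certificates on the server side is to point curl at the CA certificate that signed the stack's HEC certificate, on the client only (a sketch; the file name splunk_hec_ca.pem is hypothetical, and you would first need to export that CA certificate from the trial stack):

# Verify the server against a locally stored CA bundle instead of the system trust store
curl --cacert splunk_hec_ca.pem \
  https://prd-p-xxxxx.splunkcloud.com:8088/services/collector/event \
  -H "Authorization: Splunk token" \
  -d '{"event": "hello world"}'

Whether production Splunk Cloud stacks present a publicly trusted certificate on their HEC endpoint is something Splunk support can confirm for your account type.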
Thanks for your response. I followed the documentation, but I have one question: When Tenable is running as a vulnerability management solution, which section of the documents should I follow step by step? Could you please help me with this?
I am sending logs from an application to a Splunk server with Splunk Logging for Java, using the HTTP Event Collector with a log4j2 configuration. The logs are printed correctly to the console but are not getting pushed to the Splunk server, and I am not even getting any error. Below is my log4j2.xml configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="info" name="example" packages="org.example">
  <Appenders>
    <Console name="console" target="SYSTEM_OUT">
      <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level }[%style{%t}{bright, blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" />
    </Console>
    <File name="MyFile" fileName="logs/app.log">
      <PatternLayout>
        <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
      </PatternLayout>
    </File>
    <SplunkHttp name="httpconf"
                url="http://localhost:8088"
                token="b489e167-d96d-46ec-922f-6b25fc83f199"
                host="localhost"
                index="spring_dev"
                source="source name"
                sourcetype="log4j"
                messageFormat="text"
                disableCertificateValidation="true">
      <PatternLayout pattern="%m" />
    </SplunkHttp>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="console" />
      <AppenderRef ref="MyFile" />
      <AppenderRef ref="httpconf" />
    </Root>
  </Loggers>
</Configuration>
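One common cause of "console works, HEC stays silent, no errors" with this library is that events sit in the appender's batch buffer and the JVM exits before they are flushed. A sketch of the appender with batching tuned to send immediately (the attribute names batch_size_count and batch_interval come from the splunk-library-javalogging documentation; verify them against the version you are using):

<!-- Send each event as soon as it is logged instead of batching -->
<SplunkHttp name="httpconf"
            url="http://localhost:8088"
            token="b489e167-d96d-46ec-922f-6b25fc83f199"
            index="spring_dev"
            sourcetype="log4j"
            messageFormat="text"
            batch_size_count="1"
            batch_interval="100"
            disableCertificateValidation="true">
  <PatternLayout pattern="%m" />
</SplunkHttp>

It is also worth confirming with curl that the token and URL accept events at all, independent of log4j.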
Try without the brackets around the concatenated strings

| eval start_time=exact(coalesce(start_time,'_time')),
    description=coalesce(description,("Unknown text for error number " . error)),
    error_description=error . "-" . description,
    group=isc_id . error . start_time
No. This is not the answer. This is the general idea of an answer. There are no specifics, which would depend on details the OP hasn't provided.
Either your table is misaligned or you're trying to do something very non-obvious. I don't understand the relation between this:

service   errorNumber   errortype   Failed
aaca      0             fail        8
aaca      10            pass        1000
aaca      25            fail        290
aaca      120           fail        8
aaca      80            pass        800
aaca      200           fail        400
aaca      210           pass        22
aaca      500           fail        10

and this:

service   errorNumber   errortype   Failed
aaca      0             fail        2538
          10            pass
          25            fail
          120           fail
          80            pass
          200           fail
          210           pass
          500           fail

Also remember that Splunk is not Excel, so you can't merge cells.
@Richfez Thanks for the response, Rich. Since mvexpand/makemv are the basics when it comes to splitting a field value, I had given them a try, and tried again now as well. Like you've mentioned, trying this on the example data gives me a 13-row output. But once I'm there, I do not know how to pick one pair of values for each row from the expanded list of values spread across multiple rows:

Time   parameter   value
x      a           x1
x      c           x1
x      b           x1
x      a           x2
x      c           x2
x      b           x2
x      a           x3
x      c           x3
x      b           x3
y      d           y1
y      e           y1
y      d           y2
y      e           y2

After this, I'm unsure how to achieve the expected output:

Time   parameter   value
x      a           x1
x      c           x2
x      b           x3
y      d           y1
y      e           y2
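A common SPL pattern for this is to zip the two multivalue fields together before expanding, so the nth parameter stays attached to the nth value (a minimal sketch, assuming parameter and value are parallel multivalue fields on each event; field names are taken from the tables above, with _time standing in for the Time column):

| eval pair=mvzip(parameter, value)
| mvexpand pair
| eval parameter=mvindex(split(pair, ","), 0), value=mvindex(split(pair, ","), 1)
| table _time parameter value

mvzip pairs values by position, so this only works if the two fields carry the same number of values in the same order.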
It will work, but extract with "#012 ".
Let me agree with your disagreement. The question, though, is: do you agree with the answer - the four points mentioned initially, about centrally collecting the events on a single server and monitoring them there?
OK. Let me disagree here. The answer, typically for ChatGPT, had no technical details at all (which is understandable, since the question had almost none). And it was indeed copy-pasted: https://chat.openai.com/share/0291a463-54f3-4fdb-97f7-c152ed1117f3 Anyone can put their question to a so-called AI service and get an "answer". The power of this forum is that people can share their experience and expertise. Anyone can use a search engine.
Hi uthornander_spl, Did you ever find the answer or solution to this?
Sounds like a feature - trellis is probably sorting the display based on the first field
Hello Splunkers!! With my query below, I am not getting the group & error_description fields. Please advise what needs to be modified in the last line of the query to get results for those fields.

index=2313917_2797418_scada
| xpath outfield=ErrorType "//ErrorType"
| search ErrorType IN("OPERATIONAL", "TECHNICAL")
| xpath outfield=AreaID "//AreaID"
| xpath outfield=ZoneID "//ZoneID"
| xpath outfield=EquipmentID "//EquipmentID"
| xpath outfield=MIS_Address "//MIS_Address"
| xpath outfield=State "//State"
| xpath outfield=ElementID "//ElementID"
| rex field=_raw "eqtext\:Description\>(?P<description>.+)\<\/eqtext\:Description"
| rename EquipmentID as equipment ZoneID as zone AreaID as area ElementID as element State as error_status MIS_Address as error
| eval isc_id=area.".".zone.".".equipment
| search isc_id="*" area="*" zone="*" equipment="*"
| eval start_time=exact(coalesce(start_time,'_time')), _virtual_=if(isnull(virtual),"N","Y"), _cd_=replace('_cd',".*:","")
| fields + _time, isc_id, area, zone, equipment, element, error, start_time error_status
| sort 0 -_time _virtual_ -"_indextime" -_cd_
| dedup isc_id
| fields - _virtual_, _cd_
| eval _time=start_time
| lookup isc.csv id AS isc_id output statistical_subject mark_code
| lookup detail_status.csv component_type_id AS statistical_subject output alarm_severity description operational_rate technical_rate
| search alarm_severity="*" mark_code="*"
| fillnull value=0 technical_rate operational_rate
| eval start_time=exact(coalesce(start_time,'_time')), description=coalesce(description,("Unknown text for error number " . error)), error_description=((error . "-") . description), group=((isc_id . error) . start_time)
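When a concatenation like error_description comes back empty, it is usually because one of its inputs is null on the events in question. A quick hedged check, inserted just before the final eval, shows which inputs are missing (a diagnostic sketch; it reuses only fields already defined above):

| eventstats count as total count(error) as with_error count(description) as with_description

If with_error is much lower than total, the MIS_Address extraction (and hence error) is the field to fix first, which matches what was found later in this thread.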
Try with coalesce

| eval nameA=coalesce(nameA, nameB), addressA=coalesce(addressA, addressB), cellA=coalesce(cellA, cellB)
| eventstats count by accid nameA addressA cellA
| where count==1
Thank you @bowesmana. However, I could not get the results with the above. Let me try to put the requirement with an example again.

Search 1:

index=_internal sourcetype=scheduler earliest=-1h@h latest=now
| stats latest(status) as FirstStatus by scheduled_time savedsearch_name
| search NOT FirstStatus IN ("success","delegated_remote")

This query will give a result like the one below:

scheduled_time   savedsearch_name   FirstStatus
1712131500       ABC                skipped

Now I want to take the savedsearch_name ABC and the scheduled_time 1712131500 into the next query and search like below:

index=_internal sourcetype=scheduler savedsearch_name="ABC" earliest=-1h@h latest=now
| eval failed_time="1712131500"
| eval range=if((failed_time>=durable_cursor AND failed_time<=scheduled_time),"COVERED","NOT COVERED")
| where durable_cursor!=scheduled_time
| table savedsearch_name durable_cursor scheduled_time range

04-03-2024 05:38:18.025 +0000 INFO SavedSplunker ... savedsearch_name="ABC", priority=default, status=success, durable_cursor=1712131400, scheduled_time=1712131600

Combining both into one search is fine. If not, taking the values, passing them into a lookup, and then referring to it later is also fine.
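One way to feed each (savedsearch_name, scheduled_time) pair from the first search into the second is the map command, which runs the inner search once per result row and substitutes $field$ tokens (a sketch under the assumption that the field names match the scheduler events exactly; maxsearches caps the number of inner runs):

index=_internal sourcetype=scheduler earliest=-1h@h latest=now
| stats latest(status) as FirstStatus by scheduled_time savedsearch_name
| search NOT FirstStatus IN ("success","delegated_remote")
| map maxsearches=50 search="search index=_internal sourcetype=scheduler savedsearch_name=\"$savedsearch_name$\" earliest=-1h@h latest=now
    | eval failed_time=$scheduled_time$
    | where durable_cursor!=scheduled_time
    | eval range=if(failed_time>=durable_cursor AND failed_time<=scheduled_time, \"COVERED\", \"NOT COVERED\")
    | table savedsearch_name durable_cursor scheduled_time range"

map is expensive, so the lookup approach mentioned above (write the first search's results with outputlookup, then read them back in the second search) scales better if the first search returns many rows.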
Hello @PickleRick, the other 4 points you mentioned were of no use, which is why they're not included in the answer. A couple of points:

1. The question author mentioned "I'm fairly new to Splunk and I'm still learning how to set things up so as many details as possible would be helpful." - which is why the answer mentions having everything in one place and monitoring it later, which is the usual practice.
2. A community answer initiates a "thread" where further discussion can take place about what to achieve and how.
3. The question also mentions "I believe the process is to have the printers redirect their logs to the print server to a specific folder, then add that folder to the list of logs being reported in the Splunk forwarder. Does that sound correct?" - which is one of the best ways to monitor the logs from one place.
4. It's not copy-pasting an answer; it's about taking a reference -> checking its authenticity -> updating it as required and sharing it with the community. One could literally ask GPT each and every Splunk Community question and paste the answers - but that's not what's happening. We as a community want to use new tools while making sure whatever we post is authentic and actually helps the ones who post here.
#012 here is the line feed character (\n) escaped by rsyslog (just as #011 is an escaped tab, \t). The question is why it's escaped. It would be easiest if the events were broken apart by rsyslog itself.
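If changing the rsyslog escaping isn't an option, the embedded records can be split at search time (a minimal sketch, assuming the records are separated by the literal text #012 in _raw):

| eval line=split(_raw, "#012")
| mvexpand line

Fixing it at ingest - having rsyslog emit real newlines and letting Splunk's line breaking handle the rest - is still the cleaner long-term solution.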
Hello! When I set up Google Workspace for Splunk to collect Google Workspace's OAuth token event log, the following error occurs. The credential is valid, and other logs (drive, login, etc.) are being collected fine. I would like to know the cause and a solution.

error_message="'str' object has no attribute 'get'" error_type="<class 'AttributeError'>" error_arguments="'str' object has no attribute 'get'" error_filename="google_client.py" error_line_number="1242" input_guid="{input-guid-number}" input_name="token"

For reference, the Google Workspace OAuth token log format: https://developers.google.com/admin-sdk/reports/v1/appendix/activity/token?hl=en
Migration of VMs between datastores is completely transparent to Splunk running inside the VM. So, as @scelikok said, as long as you have enough performance on that datastore, you should be OK; the migration process itself is something your virtualization admin should handle, and it's out of scope for this forum.