All Topics


I have two sourcetypes in the same index, both in JSON format. One contains hosts and vulnerability scan data and the other contains hosts and host info. I ultimately want to tie the vulnerability data to the hosts in the other sourcetype and create an outputlookup. The matching field I would like to use is IP, but the field names are different in each sourcetype: sourcetype1 has the IP field named ipv4s{} and sourcetype2's IP field is called asset.ipv4. I have tried combining them using eval and coalesce, but when I do, ipv4s{} comes up as the field value rather than the IPs from the two previously mentioned fields. Here is the search I've been trying:

    index=index (sourcetype=sourcetype1 OR sourcetype=sourcetype2) | eval IP=coalesce("ipv4s{}","asset.ipv4")
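A likely cause: in eval, double-quoted names are string literals, while field names containing special characters like { } or . must be wrapped in single quotes to be read as fields. A minimal sketch of the corrected search, reusing the names from the question:

    index=index (sourcetype=sourcetype1 OR sourcetype=sourcetype2)
    | eval IP=coalesce('ipv4s{}', 'asset.ipv4')
    | table sourcetype IP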
Hello team. Is there an upgrade path for Splunk on my heavy forwarders, or is it just a matter of installing the new version of the Splunk RPM? I don't see any Splunk docs about breaking changes or preparation steps for this upgrade.
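For an RPM install, the usual pattern is to stop Splunk, install the new package over the old one, and let the migration run on the next start. A sketch assuming a default /opt/splunk location (back up $SPLUNK_HOME/etc first, and check version compatibility with your indexers):

    # stop the heavy forwarder
    /opt/splunk/bin/splunk stop
    # upgrade the package in place; <version> is a placeholder
    rpm -Uvh splunk-<version>.rpm
    # start Splunk and accept the migration prompts
    /opt/splunk/bin/splunk start --accept-license --answer-yes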
Hi Team, I am trying to search for <string1> and <string2> on different lines of the same log event, which has about 100 lines. If both match, I want to show the result with _time, String1, String2. Please assist me. The sample log looks like this:

    ... 66 lines omitted ...
    Linexx
    Linexx ]: "<string1>"
    Linexx <string2>

The result should be like: _time, String1, String2
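One way to sketch this, assuming the whole 100-line log arrives as a single event and the two strings can be captured with regexes (the index, sourcetype, and capture patterns below are placeholders):

    index=your_index sourcetype=your_sourcetype "<string1>" "<string2>"
    | rex "\]: \"(?<String1><string1 pattern>)\""
    | rex "(?<String2><string2 pattern>)"
    | where isnotnull(String1) AND isnotnull(String2)
    | table _time String1 String2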
I have a very simple search, and when I add the sort command I lose almost 90% of my actual results.

    index="features" application=kokoapp type=userStats | sort feature | dedup feature | table feature

Without the sort command I get 35 results, and with it included I only get 4 results. Is there something I am missing?
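Two things worth checking, hedged since the data isn't visible here: sort truncates at 10,000 results unless written as sort 0 (unlikely to matter at 35 results), and dedup behaves differently on multivalue fields. A sketch that sidesteps both by aggregating first:

    index="features" application=kokoapp type=userStats
    | stats count by feature
    | sort 0 feature
    | table feature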
I can set up a query with a simple trendInterval single-value panel comparing the same time period to 24 hours prior. Add that panel to a new, clean Dashboard Studio dashboard and it displays different data for the trend. Add the exact same query to a classic dashboard and all works as intended.

Dashboard Studio:

    "type": "splunk.singlevalue",
    "options": {
        "colorMode": "none",
        "drilldown": "none",
        "numberPrecision": 0,
        "sparklineDisplay": "off",
        "trendDisplay": "percent",
        "trellis.enabled": 0,
        "trellis.scales.shared": 1,
        "trellis.size": "medium",
        "trendInterval": "-24h",
        "unitPosition": "after",
        "shouldUseThousandSeparators": true,
        "majorColor": "> majorValue | rangeValue(majorColorEditorConfig)",
        "trendColor": "> trendValue | rangeValue(trendColorEditorConfig)",
        "majorFontSize": 100,
        "trendFontSize": 20

Classic:

    "type": "splunk.singlevalue",
    "options": {
        "colorMode": "block",
        "drilldown": "none",
        "numberPrecision": 0,
        "sparklineDisplay": "off",
        "trendDisplay": "percent",
        "trellis.enabled": 0,
        "trellis.scales.shared": 1,
        "trellis.size": "medium",
        "trendInterval": "-24h",
        "unitPosition": "after",
        "shouldUseThousandSeparators": true,
        "majorFontSize": 100,
        "majorColor": "> majorValue | rangeValue(majorColorEditorConfig)",
        "trendColor": "> trendValue | rangeValue(trendColorEditorConfig)",
        "trendFontSize": 20
We have some logs coming in the following format:

    ERROR | 2023-03-16 01:27:14 EDT | field1=field1_value | field2=field2_value | field3=field3_value | field4=field4_value | field5=field5_value | field6=field6_value | field7={} | message=Message String with spaces.
    java.stacktrace.Exception: Exception Details.
    at ...
    at ...
    at ...
    at ...

Splunk's default extraction works well in getting all key=value pairs, except for the field "message", where only the first word before the space is extracted and the rest is dropped. To get around this, I used the following inline regex:

    | rex field=_raw "message=(?<message>.+)"

This works well in search and extracts the entire message string right up to the newline. But when I use the same regex in the configuration file, it seems to ignore the newline and continues matching everything else all the way to the end of the event. I have tried EXTRACT as well as REPORT (using transforms.conf), with the same result. Do props.conf/transforms.conf interpret the regex differently? To summarize:

With default Splunk extraction:

    message = Message

With inline rex:

    message = Message String with spaces.

With the regex in props/transforms:

    message = Message String with spaces. java.stacktrace.Exception: Exception Details. at ... at ... at ... at ...

Any suggestions on how to use this regex from the configuration? Thank you,
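One hedged workaround: rather than relying on how each layer treats . against newlines, exclude line breaks explicitly in the character class, so the match stops at the end of the line wherever the regex runs. A sketch for props.conf, with a hypothetical sourcetype name:

    [your_sourcetype]
    EXTRACT-message = message=(?<message>[^\r\n]+)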
I have a single-value panel. Is it possible to display another panel only after clicking on the single-value one?
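In classic Simple XML this can be sketched with a drilldown that sets a token and a second panel that depends on it; the token name and searches here are made up:

    <panel>
      <single>
        <search><query>... your single-value search ...</query></search>
        <drilldown>
          <set token="show_detail">true</set>
        </drilldown>
      </single>
    </panel>
    <panel depends="$show_detail$">
      <table>
        <search><query>... detail search ...</query></search>
      </table>
    </panel>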
I am trying to create a drilldown from a pie chart to a table on the same dashboard. Is it possible? Thanks
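Yes, the same token pattern works for a pie chart in Simple XML: capture the clicked slice with $click.value$ and feed it to a table panel that appears once the token is set. A sketch with placeholder searches and field names:

    <panel>
      <chart>
        <search><query>index=... | stats count by category</query></search>
        <option name="charting.chart">pie</option>
        <drilldown>
          <set token="selected_slice">$click.value$</set>
        </drilldown>
      </chart>
    </panel>
    <panel depends="$selected_slice$">
      <table>
        <search><query>index=... category="$selected_slice$" | table _time category</query></search>
      </table>
    </panel>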
I have a lookup table with a DNS blocklist. What query can I use to search for events containing any of the blocklisted domains? I had received advice to create a csv file with two columns, "Domain" and "suspicious", with suspicious set to 1 for all the domains. Then I would search the dns sourcetype for suspicious=1. This did not work.
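A sketch of the lookup approach, assuming the csv has been uploaded as a lookup file and the DNS events carry the queried name in a field called query (index and field names are assumptions):

    index=your_dns_index sourcetype=dns
    | lookup dns_blocklist.csv Domain AS query OUTPUT suspicious
    | where suspicious=1

If the lookup isn't attached to the events automatically, a subsearch over the file also works:

    index=your_dns_index sourcetype=dns [| inputlookup dns_blocklist.csv | rename Domain AS query | fields query]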
I am troubleshooting missing data from hosts. I have the name of the index that is missing the data, and I would like to track down the suspect forwarder that is not sending data to that index. I am not interested in the indexer server.
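A starting point: list when each host last sent anything to that index, then confirm whether the quiet forwarder is still phoning home in _internal. The index name is a placeholder:

    | tstats latest(_time) as lastSeen where index=your_index by host
    | eval hoursSinceLastEvent=round((now()-lastSeen)/3600,1)
    | sort - hoursSinceLastEvent

A host that has gone quiet in your_index but still appears in index=_internal host=<suspect> is alive and connected, which points at its inputs rather than at the forwarder itself.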
My GoogleFu is failing me. There are a lot of btool tutorials, but I can't find this solution... I'm on a Windows 10 system, trying to debug the effective config of its universal forwarder. I get the same message with both command prompt (cmd.exe) and with PowerShell. I think it's apparent that it's a variable setting somewhere, but where?
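Since the actual error text isn't quoted, this is a guess, but the usual culprit is SPLUNK_HOME not being set for the shell, so btool cannot locate the configuration tree. Setting the variable and running the binary from the forwarder's own bin directory typically resolves it (the install path below is the default and is an assumption):

    REM cmd.exe
    set SPLUNK_HOME=C:\Program Files\SplunkUniversalForwarder
    cd "%SPLUNK_HOME%\bin"
    splunk.exe btool inputs list --debug

    # PowerShell
    $env:SPLUNK_HOME = "C:\Program Files\SplunkUniversalForwarder"
    cd "$env:SPLUNK_HOME\bin"
    .\splunk.exe btool inputs list --debug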
I have a search in Splunk that returns events for failed logins. I want to be able to check for a successful authentication from a user and an IP in the 10 days prior to the failed login. Is this possible via a query?

    index=logins
    | where AuthenticationResults="failed"
    | sort 0 - _time
    | eval successtime = if(AuthenticationResult=="success", _time, null())
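One hedged way, pulling both outcomes in a single search and comparing times with stats; the question mixes AuthenticationResult and AuthenticationResults, and user and src_ip are assumed field names:

    index=logins
    | stats max(eval(if(AuthenticationResult=="failed",_time,null()))) as lastFailure
            max(eval(if(AuthenticationResult=="success",_time,null()))) as lastSuccess
            by user, src_ip
    | where isnotnull(lastFailure) AND lastSuccess>=lastFailure-864000 AND lastSuccess<=lastFailure

864000 seconds is the 10-day window; the search time range needs to reach back at least 10 days before the failures of interest.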
We are on version 6.3.0 of the AWS add-on, but we are still seeing the below errors in the splunk_ta_aws_aws_sqs_based_s3_ess-p-sys-awsconfig.log file.

    2023-03-16 02:55:41,201 level=ERROR pid=72620 tid=Thread-8 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:_ingest_file:514 | datainput="ess-p-sys-awsconfig" start_time=1678849700, message_id="39693d02-55gg-4e62-9895-9796fa51ed2f" created=1678902941.093989 ttl=300 job_id=95c972ab-5316-4bda-bfaf-a337ad5effbe | message="Failed to ingest file." uri="s3://essaws-p-system/aws/config/AWSLogs/385473250182/Config/ap-northeast-1/2023/3/15/OversizedChangeNotification/AWS::EC2::SecurityGroup/sg-03e7b8de81918c141/385473250182_Config_ap-northeast-1_ChangeNotification_AWS::EC2::SecurityGroup_sg-03e7b8de81918c141_20230315T171015Z_1678900215945.json.gz"

    2023-03-16 02:55:41,201 level=CRITICAL pid=72620 tid=Thread-8 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:_process:442 | datainput="ess-p-sys-awsconfig" start_time=1678849700, message_id="39693d02-55gg-4e62-9895-9796fa51ed2f" created=1678902941.093989 ttl=300 job_id=95c972ab-5316-4bda-fgff-ad5effbe | message="An error occurred while processing the message."
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 431, in _process
        self._parse_csv_with_delimiter,
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 478, in _ingest_file
        for records, metadata in self._decode(fileobj, source):
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/decoder.py", line 95, in __call__
        records = document["configurationItems"]
    KeyError: 'configurationItems'
Hi, I seem to be having a mental block which maybe someone can help with. I have an input dropdown which runs a query to populate values in the dropdown; depending on the value selected, I want to set two more tokens. The condition match should be on the first 6 characters of the selected value, so if aaaaaa* set two new tokens, and if aaaaab* set a different two. All tokens are then used in panels on the dashboard and should refresh each time the value in the dropdown is changed. Sample XML below that will hopefully give the idea.

    <input type="dropdown" token="iptoken" searchWhenChanged="true">
      <label>Apply Engine</label>
      <fieldForLabel>value</fieldForLabel>
      <fieldForValue>value</fieldForValue>
      <search>
        <query>index=ti-u_* sourcetype=$iptoken$ value=aaaaa** | table value | dedup value | sort value</query>
        <earliest>@y</earliest>
        <latest>now</latest>
      </search>
      <change>
        <condition match="$iptoken$==&quot;aaaaaa*&quot;">
          <set token="newtoken1">newvalue1</set>
          <set token="newtoken2">newvalue2</set>
        </condition>
        <condition match="$iptoken$==&quot;aaaaab*&quot;">
          <set token="newtoken1">newvalue1</set>
          <set token="newtoken2">newvalue2</set>
        </condition>
      </change>
    </input>

thanks in advance
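The match attribute takes an eval-style expression, so a literal "aaaaaa*" is compared as plain text rather than as a wildcard; prefix matching can instead be done with match() and an anchored regex. A sketch of the <change> block only, with placeholder token values:

    <change>
      <condition match="match($iptoken$, &quot;^aaaaaa&quot;)">
        <set token="newtoken1">valueA1</set>
        <set token="newtoken2">valueA2</set>
      </condition>
      <condition match="match($iptoken$, &quot;^aaaaab&quot;)">
        <set token="newtoken1">valueB1</set>
        <set token="newtoken2">valueB2</set>
      </condition>
    </change>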
Is there a way I can reduce cost on Splunk by using the AWS Security Lake add-on?
Is the software compatible with large Citrix VAD environments, MCS-provisioned Windows Servers, 1 image (frontend servers)? For example, 9000 concurrent users and 9000 Java processes?

How do you use environment variables in the config files (agent.properties of the Java agent, analytics-agent.properties of the analytics agent)? Setting environment variables such as APPDYNAMICS_AGENT_UNIQUE_HOST_ID via GPO is not possible, since the hostname changes for each machine. Machines can be dynamically removed and new ones added. If I add the COMPUTERNAME system variable to APPDYNAMICS_AGENT_UNIQUE_HOST_ID, the service won't be able to see it after reboot, because it is not visible to the process, due to Windows changing the variable while the service is running. It can be a timing issue. Restarting the service is not a useful workaround. I need the configs to use environment variables. No configuration is done per server; all needs to be done globally.

Setting APPDYNAMICS_AGENT_NODE_NAME, APPDYNAMICS_AGENT_UNIQUE_HOST_ID, ad.agent.name needs to be dynamic.
Setting APPDYNAMICS_AGENT_BASE_DIR, ad.dw.log.path for log paths needs to be dynamic.
Setting those globally, but with dynamic values.
Using scripts to modify startups is not a useful way.

Java Web Start is used for the application, so JAVA_TOOL_OPTIONS is set, since this is the only working way to start the Java agent with JNLP: Instrumenting Java Web Start Applications - AppDynamics Community

Note: javaws doesn't know the -J-javaagent parameter (tested with Oracle Java and OpenWebstart).
Note: A service can only access UNC paths: Services and Redirected Drives - Win32 apps | Microsoft Learn
Hi All, we have installed SmartStore in our environment and need your help in validating SmartStore features. We have followed the links below and were able to verify the connectivity and the remote store; basic testing is fine.

[SmartStore] How to verify splunk indexer connecti... - Splunk Community
Troubleshoot SmartStore - Splunk Documentation

But we need to know if there are any SPL queries or steps to show that installing SmartStore has improved performance. We are keen on steps or data that will give evidence justifying the SmartStore features.

Thanks and Regards
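There is no single built-in "SmartStore improved things" report, but one hedged way to build evidence is to compare search runtimes across the cutover using the audit index, which records total_run_time for completed searches:

    index=_audit action=search info=completed
    | timechart span=1d avg(total_run_time) as avg_runtime perc95(total_run_time) as p95_runtime

Run it over a window spanning the migration date; cache behavior can additionally be eyeballed via the CacheManager events in index=_internal.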
Hi, I'm trying to change the sourcetype of all data from a specific source.

In props.conf:

    [source::/var/log/messages]
    TRANSFORMS-change_sourcetype = syslog_sourcetype_change

In transforms.conf:

    [syslog_sourcetype_change]
    SOURCE_KEY = MetaData:Sourcetype
    REGEX = .*
    FORMAT = sourcetype::syslog:nix
    DEST_KEY = MetaData:Sourcetype

I checked the running config via btool and the stanzas are correctly configured on my heavy forwarder, but it does not work; the logs remain in the syslog sourcetype. Thanks in advance
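Two things worth checking, offered as hypotheses: index-time transforms only fire while the event is being parsed for the first time, so the config must sit on the first full Splunk instance the data passes through (it will not re-fire for data already cooked by an upstream forwarder), and the SOURCE_KEY line is unnecessary here since the rewrite can key off the raw event. A minimal sketch:

props.conf:

    [source::/var/log/messages]
    TRANSFORMS-change_sourcetype = syslog_sourcetype_change

transforms.conf:

    [syslog_sourcetype_change]
    REGEX = .
    FORMAT = sourcetype::syslog:nix
    DEST_KEY = MetaData:Sourcetype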
Hi Splunk Experts, I have a set of users whom I want to allow only to run ad-hoc searches. I don't want them creating dashboards, reports, or alerts. How can this be achieved? Any pointers to documentation would be helpful. Thanks in advance, Santosh
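One hedged approach via authorize.conf: build a role that inherits from user but disables scheduling, and control the saving of dashboards and reports through app write permissions rather than capabilities, since there is no single "create dashboard" capability. The role name below is made up:

    [role_adhoc_only]
    importRoles = user
    schedule_search = disabled

Removing this role's write access to the relevant apps prevents the users from saving shared dashboards and reports there; whether private objects must also be blocked is worth testing in your environment.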
From the events below, I want to extract fields per my requirements. Please check the events listed below.

Event 1

    gcse1 DB-OK-lpdecpdb0001089-deusw1pgecpsd000083
    nemoe1 DB-OK-lpdecpdb0000922-hvidlnssdb01
    gcodse1 DB-OK-lpdecpdb0002495-deusw1pgecpsd000198
    edmse1 DB-OK-lpdecpdb0002521
    vPaymente1 DB-OK-lpdecpdb0001121-deusw1pgecpsd000094
    cadence1 DB-OK-lpdecpdb0001269-deusw1pgecpsd000111

Event 2

    nemoe1 DB-OK-lpdecpdb0000922-hvidlnssdb01
    gcodse1 DB-OK-lpdecpdb0002495-deusw1pgecpsd000198
    gcse1 DB-OK-lpdecpdb0001089-deusw1pgecpsd000083
    cadence1 DB-OK-lpdecpdb0001269-deusw1pgecpsd000111
    edmse1 DB-OK-lpdecpdb0002521
    vPaymente1 DB-OK-lpdecpdb0001121-deusw1pgecpsd000094

Event 3

    nemoe1 DB-OK-lpdecpdb0000922-hvidlnssdb01
    gcodse1 DB-OK-lpdecpdb0002495-deusw1pgecpsd000198
    gcse1 DB-OK-lpdecpdb0001089-deusw1pgecpsd000083
    cadence1 DB-OK-lpdecpdb0001269-deusw1pgecpsd000111
    edmse1 DB-OK-lpdecpdb0002521
    vPaymente1 DB-OK-lpdecpdb0001121-deusw1pgecpsd000094

The first column of each event line must be extracted as "ClusterName":

    gcse1
    nemoe1
    gcodse1
    edmse1
    vPaymente1
    cadence1

The second column must be extracted as "DB_Status":

    DB-OK

The third column must be extracted as "PostgresDB_VIPName":

    lpdecpdb0001089
    lpdecpdb0000922
    lpdecpdb0002495
    lpdecpdb0002521
    lpdecpdb0001121
    lpdecpdb0001269
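A hedged extraction sketch: one multiline rex with max_match=0 pulls all three columns from every line of the event. The status is anchored on the literal DB- prefix, and the VIP name stops at the next hyphen so lines without the trailing host segment (like edmse1's) still match:

    | rex max_match=0 "(?m)^(?<ClusterName>\S+)\s+(?<DB_Status>DB-\w+)-(?<PostgresDB_VIPName>[^-\s]+)"
    | table ClusterName DB_Status PostgresDB_VIPName

Each field comes back multivalued, one value per matching line, aligned by position.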