All Posts

Idea votes are a precious resource. As @isoutamo suggested, try nudging the PM periodically through Slack. Hopefully, they'll pick it up without the minimum number of required votes.
Hi @jasuchung, You can take advantage of Simple XML's automatic positioning of multiple visualizations within a panel for alignment, and adjust sizes as needed:

<row>
  <panel>
    <table/>
  </panel>
  <panel>
    <table/>
    <table/>
  </panel>
</row>

<dashboard version="1.1" theme="light">
  <label>panel_layout</label>
  <row depends="$alwaysHideCSS$">
    <html>
      <style>
        #table_ref_base { height: 800px !important }
        #table_ref_red { height: 400px !important }
        #table_ref_org { height: 400px !important }
      </style>
    </html>
  </row>
  <row>
    <panel>
      <table id="table_ref_base">
        <title>table_ref_base</title>
        <search>
          <query>| makeresults</query>
          <earliest>0</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel>
      <table id="table_ref_red">
        <title>table_ref_red</title>
        <search>
          <query>| makeresults count=15</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
      <table id="table_ref_org">
        <title>table_ref_org</title>
        <search>
          <query>| makeresults</query>
          <earliest>0</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</dashboard>

Whitespace may still be visible depending on the number of results in each table; however, the tables on the right should be divided evenly within the available vertical space.
Small correction: NO_BINARY_CHECK = true (If I edit my original answer, the formatting will be mangled.)
My previous answer used a single-source example, but you can set unarchive_cmd per source as needed, as sketched below.
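A minimal sketch of what that might look like, assuming a second, hypothetical log file (/tmp/bar.log) masked with sed instead of Perl:

props.conf

[source::/tmp/foo.log]
unarchive_cmd = perl -pe 's/[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/*.*.*.*/'
sourcetype = preprocess-foo
NO_BINARY_CHECK = true

# hypothetical second source with its own stdin-to-stdout command
[source::/tmp/bar.log]
unarchive_cmd = sed -e 's/password=[^ ]*/password=****/g'
sourcetype = preprocess-bar
NO_BINARY_CHECK = true

Each [source::...] stanza matches only its own path, so different files (or different forwarders) can run different preprocessing commands.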
Hi @abi2023, If you need to mask data pre-transit for whatever reason and the force_local_processing setting doesn't meet your requirements, you can use the unarchive_cmd props.conf setting to stream inputs through Perl, sed, or any command or script that reads input from stdin and writes output to stdout. For example, to mask strings that might be IPv4 addresses in a log file using Perl:

/tmp/foo.log

This is 1.2.3.4. Uh oh, 5.6.7.8 here.
Definitely not an IP address: a.1.b.4.
512.0.1.2 isn't an IP address. Oops.

inputs.conf

[monitor:///tmp/foo.log]
sourcetype = foo

props.conf

[source::/tmp/foo.log]
unarchive_cmd = perl -pe 's/[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/*.*.*.*/'
sourcetype = preprocess-foo
NO_BINARY_CHECK = true

[preprocess-foo]
invalid_cause = archive
is_valid = False
LEARN_MODEL = false

[foo]
DATETIME_CONFIG = NONE
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

The transmitted events will be masked before they're sent to the receiver. Regular expressions that work with SEDCMD should work with Perl without modification. The unarchive_cmd setting is a flexible alternative to scripted and modular inputs; the sources do not have to be archive files. As others have noted, you can deploy different props.conf configurations to different forwarders. Your props.conf settings for line breaking, timestamp extraction, etc., should be deployed to the next downstream instance of Splunk Enterprise (heavy forwarder or indexer) or Splunk Cloud.
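To verify the masking after deployment, a quick sanity check in search (a sketch; it assumes the events land in the default index with sourcetype foo):

index=main sourcetype=foo
| head 5
| table _raw

Each _raw value should show *.*.*.* wherever something looked like an IPv4 address.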
Hello, to check the version of all apps and add-ons installed on Splunk via the CLI:

./splunk list app && ./splunk list add-ons | sort

Or try this search in the Splunk GUI:

| rest /services/apps/local splunk_server=local
| table title version
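If you only want the apps that are currently enabled, a small variant of the same REST search should work (the apps/local endpoint also returns a disabled field):

| rest /services/apps/local splunk_server=local
| search disabled=0
| table title version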
query 1:

|mstats sum(transaction) as Total sum(success) as Success where index=metric-index transaction IN(transaction1, transaction2, transaction3) by service transaction
|eval SuccessPerct=round(((Success/Total)*100),2)
|xyseries service transaction Total Success SuccessPerct
|table service "Success: transaction1" "SuccessPerct: transaction1" "SuccessPerct: transaction2" "Total: transaction2" "Success: transaction2"
|join service
    [|mstats sum(error-count) as Error where index=metric-index by service errortype
    |append
        [|search index=app-index sourcetype=appl-logs (TERM(POST) OR TERM(GET) OR TERM(DELETE) OR TERM(PATCH)) OR errorNumber!=0 appls=et
        |lookup app-error.csv code as errorNumber output type as errortype
        |stats count as app.error count by appls errortype
        |rename appls as service error-count as Error]
    |xyseries service errortype Error
    |rename wvv as WVVErrors xxf as nonerrors]
|addtotals "Success: transaction1" WVVErrors nonerrors fieldname="Total: transaction1"
|eval sort_service=case(service="serv1",1,service="serv2",2,service="serv3",3,service="serv4",4,service="serv5",5,service="serv6",6,service="serv7",7,service="serv8",8,service="serv9",9,service="serv10",10)
|sort + sort_service
|table service "Success: transaction1" "SuccessPerct: transaction2" WVVErrors nonerrors
|fillnull value=0

query 1 output:

| service | Success: transaction1 | SuccessPerct: transaction2 | WVVErrors     | nonerrors  |
| serv1   | 345678.000000         | 12.33                      | 7.000000      | 110.000000 |
| serv2   | 345213.000000         | 22.34                      | 8777.000000   | 0          |
| serv3   | 1269.000000           | 12.45                      | 7768.000000   | 563        |
| serv4   | 34567.000000          | 11.56                      | 124447.000000 | 0          |
| serv5   | 23456.000000          | 67.55                      | 10.000000     | 067        |
| serv6   | 67778.000000          | 89.55                      | 15.000000     | 32         |
| serv7   | 34421.000000          | 89.00                      | 17.000000     | 56         |
| serv8   | 239078.000000         | 53.98                      | 37.000000     | 67.0000000 |
| serv9   | 769.000000            | 09.54                      | 87.000000     | 8.00000    |
| serv10  | 3467678.000000        | 87.99                      | 22.000000     | 27.000000  |
| serv11  | 285678.000000         | 56.44                      | 1123.000000   | 90.00000   |
| serv12  | 5123.000000           | 89.66                      | 34557.000000  | 34         |
| serv13  | 678.000000            | 90.54                      | 37.000000     | 56         |
| serv14  | 345234678.000000      | 89.22                      | 897.000000    | 33         |
| serv15  | 12412.33678.000000    | 45.29                      | 11237.000000  | 23.000000  |

query 2:

|mstats sum(error-count) as Error where index=metric-index by service errorNumber errortype

query 2 output:

| service | errorNumber | errortype | Error         |
| serv1   | 0           | wvv       | 7.000000      |
| serv1   | 22          | wvv       | 8777.000000   |
| serv1   | 22          | wvv       | 7768.000000   |
| serv1   | 45          | wvv       | 124447.000000 |
| serv2   | 0           | xxf       | 10.000000     |
| serv2   | 22          | xxf       | 15.000000     |
| serv2   | 22          | xxf       | 17.000000     |
| serv2   | 45          | xxf       | 37.000000     |
| serv3   | 0           | wvv       | 87.000000     |
| serv3   | 22          | wvv       | 22.000000     |
| serv3   | 22          | wvv       | 1123.000000   |
| serv3   | 45          | wvv       | 34557.000000  |
| serv4   | 0           | xxf       | 37.000000     |
| serv4   | 26          | xxf       | 897.000000    |
| serv4   | 22          | xxf       | 11237.000000  |
| serv4   | 40          | xxf       | 7768.000000   |
| serv5   | 25          | wvv       | 124447.000000 |
| serv5   | 28          | wvv       | 10.000000     |
| serv5   | 1000        | wvv       | 15.000000     |
| serv5   | 10          | wvv       | 17.000000     |
| serv6   | 22          | xxf       | 37.000000     |
| serv6   | 34          | xxf       | 87.000000     |
| serv6   | 88          | xxf       | 22.000000     |
| serv6   | 10          | xxf       | 45.000000     |

We want to combine query 1 and query 2 and get both outputs in one table.
Here on StackOverflow I share a script to change log4j2.xml and proxy.py automatically. Besides the script, I include a tip on how to do this inside a Dockerfile. Basically, I replace the hardcoded values in log4j2.xml and proxy.py with environment variables, which lets you change them more easily at deploy time.

log4j2.xml with environment variable

proxy.py with environment variable

def configure_proxy_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()
    pass

def configure_watchdog_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()
    pass
I came across a workaround that was suggested in an unofficial capacity by someone at AppDynamics during their local lab explorations. While this solution isn't officially supported by AppDynamics, it has proven effective for adjusting the log levels of both the Proxy and the Watchdog components in my AppDynamics setup. I'd like to share the steps involved, but please proceed with caution and understand that this is not a sanctioned solution. I recommend changing only the log4j2.xml file, because the proxy messages appear to be responsible for almost 99% of the log volume.

Here's a summary of the steps:

Proxy log level: The log4j2.xml file controls this. You can find it within the appdynamics_bindeps module. For example, in my WSL setup it's located at /home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml. In the Docker image python:3.9, the path is /usr/local/lib/python3.9/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml. Modify the seven <AsyncLogger> log level items within the <Loggers> section to one of the following: debug, info, warn, error, or fatal.

Watchdog log level: This can be adjusted in the proxy.py file found within the appdynamics Python module. For example, in my WSL setup it's located at /home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics/scripts/pyagent/commands/proxy.py. In the Docker image python:3.9, the path is /usr/local/lib/python3.9/site-packages/appdynamics/scripts/pyagent/commands/proxy.py. You will need to hardcode the log level in the configure_proxy_logger and configure_watchdog_logger functions by changing the level variable.

My versions:

$ pip freeze | grep appdynamics
appdynamics==24.2.0.6567
appdynamics-bindeps-linux-x64==24.2.0
appdynamics-proxysupport-linux-x64==11.68.3

log4j2.xml

<Loggers>
  <AsyncLogger name="com.singularity" level="info" additivity="false">
    <AppenderRef ref="Default"/>
    <AppenderRef ref="RESTAppender"/>
    <AppenderRef ref="Console"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.dynamicservice" level="info" additivity="false">
    <AppenderRef ref="DynamicServiceAppender"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.ee.service.datapipeline" level="info" additivity="false">
    <AppenderRef ref="DataPipelineAppender"/>
    <AppenderRef ref="Console"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.datapipeline" level="info" additivity="false">
    <AppenderRef ref="DataPipelineAppender"/>
    <AppenderRef ref="Console"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.BCTLogger" level="info" additivity="false">
    <AppenderRef ref="BCTAppender"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.api" level="info" additivity="false">
    <AppenderRef ref="APIAppender"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.segment.TxnTracer" level="info" additivity="false">
    <AppenderRef ref="Default"/>
    <AppenderRef ref="Console"/>
  </AsyncLogger>
  <Root level="error">
    <AppenderRef ref="Console"/>
    <AppenderRef ref="Default"/>
  </Root>
</Loggers>

proxy.py

def configure_proxy_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = logging.DEBUG if debug else logging.INFO
    pass

def configure_watchdog_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = logging.DEBUG if debug else logging.INFO
    pass

Suggestion: replace the hardcoded values with environment variables.

In the log4j2.xml file, change from [level="info"] to
In the proxy.py file, change from [logging.DEBUG if debug else logging.INFO] to [os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()]

Warning: these paths and methods may vary based on your AppDynamics version and environment setup. Always back up files before making changes, and be aware that updates to AppDynamics may overwrite your customizations. I hope this helps!
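For the log4j2.xml side of that suggestion, the replacement target was elided above; a minimal sketch, assuming a hypothetical APP_APPD_PROXY_LOG_LEVEL variable and Log4j2's standard ${env:...} lookup with a :- default:

<!-- hypothetical: read the level from the environment, defaulting to info -->
<AsyncLogger name="com.singularity" level="${env:APP_APPD_PROXY_LOG_LEVEL:-info}" additivity="false">
  <AppenderRef ref="Default"/>
</AsyncLogger>

You would then set APP_APPD_PROXY_LOG_LEVEL=warn (or debug, error, fatal) in the container environment at deploy time instead of editing the file.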
Explain the Splunk Enterprise Event Collector, Processor, and Console architecture.
Hi @yuanliu, as suggested I tried the query below, but I am not getting the expected output: there is no previous-hour data in the delta column; all values are 0. Please see my output.

|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
|streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
|sort application _time
|eval delta = if(Trans_max == Trans, delta, "-" . delta)
|eval pct_delta = delta / Trans * 100
|fields - Trans_max

Output:

| _time            | application | Trans       | delta_Trans | pct_delta_Trans |
| 2022-01-22 02:00 | app1        | 3456.000000 | 0.000000    | 0.000000        |
| 2022-01-22 02:00 | app2        | 5632.000000 | 0.000000    | 0.000000        |
| 2022-01-22 02:00 | app3        | 5643.000000 | 0.000000    | 0.000000        |
| 2022-01-22 02:00 | app4        | 16543.00000 | 0.000000    | 0.000000        |
"><h1>h1</h1>
That's not the prettiest data. In order to make some sense of this data, you have to first parse the outer array:

| spath {}

Then you have to split the resulting multivalue field:

| mvexpand {}

Now you can parse each of those structures separately:

| spath input={}
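Putting the three steps together on the sample array from the question, a sketch (makeresults fakes the event; the trailing eval just prefixes the numeric group with "Group" to match the desired table):

| makeresults
| eval _raw="[{\"Group\": 1, \"Pass\": 239, \"Fail\": 6}, {\"Group\": 2, \"Pass\": 746, \"Fail\": 14}, {\"Group\": 3, \"Pass\": 760, \"Fail\": 10}]"
| spath {}
| mvexpand {}
| spath input={}
| eval Group="Group".Group
| table Group Pass Fail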
Please help with a Splunk query to get pass and fail counts in table format from the JSON array below.

| Group  | Pass | Fail |
| Group1 | 239  | 6    |
| Group2 | 746  | 14   |
| Group3 | 760  | 10   |

[
  { "Group": 1, "Pass": 239, "Fail": 6 },
  { "Group": 2, "Pass": 746, "Fail": 14 },
  { "Group": 3, "Pass": 760, "Fail": 10 }
]
Hello there, we're in the process of deploying Splunk Cloud. We have installed the Microsoft Office 365 App for Splunk along with all the required add-ons. The app is working as intended except that we're not getting any Message Trace data. We followed the instructions to properly set up the add-on input and assigned the API permissions on the Azure side, but for whatever reason we're still not getting any trace data. It looks like the problem is on the Azure side; we have assigned the appropriate API permissions as stated in the documentation. Is there anything else that needs to be set up on the Azure or Splunk side to get Exchange trace data?

We followed these instructions for the Splunk Add-on for Microsoft Office 365 integration: https://docs.splunk.com/Documentation/AddOns/released/MSO365/ConfigureappinAzureAD

Any help would be highly appreciated.
I created an application in Entra ID (single tenant) and then created a secret (screenshots attached). I also gave the application Azure Event Hubs Data Receiver access on the subscription. The authentication fails.
Hello @sushraw, can you please try appending the below:

| makemv CmdArgAV
| eval CmdArgAV = replace(CmdArgAV, "\n", ", ")

The final result based on the sample event you shared would be:

| makeresults
| eval _raw="Mar 26 15:37:59 <device_IP> <device_name>_Passed_Authentications 0045846127 2 0 2024-03-26 14:37:59.011 +00:00 06024423114 5202 NOTICE Device-Administration: Command Authorization succeeded, ConfigVersionId=1398, Device IP Address=<device_IP>, DestinationIPAddress=<device_IP>, DestinationPort=49, UserName=<user>, CmdSet=[ CmdAV=show CmdArgAV=running-config CmdArgAV=interface CmdArgAV=Ethernet1/19 CmdArgAV=<cr> ], Protocol=Tacacs, MatchedCommandSet=Unsafecommand, RequestLatency=10, NetworkDeviceName=<device_name>"
| rex field=_raw "CmdSet=\[(?<CmdSet>[^\]]+)\]"
| rex field=CmdSet max_match=0 "CmdArgAV=(?<CmdArgAV>[^\s]+)"
| makemv CmdArgAV
| eval CmdArgAV = replace(CmdArgAV, "\n", ", ")

If this reply helps you, Karma would be appreciated.
Step 1: Prerequisites

a. Splunk® Universal Forwarder with Splunk_TA_nix installed
b. "package.sh" should be enabled, similar to the example below

Note: the UF needs to be restarted to enable the input if it was previously started without the input.

Step 2: Deploy the updated inputs / app

If you need to deploy the app, you'll only need to deploy it to Linux hosts. Do make sure you enable a splunkd restart on your app deployment.

Step 3: Detect the CVE

Now allow time for the data to arrive at your indexing tier, and you should be able to run this search as a detection:

source=package sourcetype=package NAME=xz-libs VERSION IN ("5.6.0","5.6.1")

Note: you may need to add index=os or index=Your_Linux_TA_Data_Index_here, but by default the data will be in index=main.

You'll probably want to take the search a few steps further. The first thing that comes to mind is adding "| stats latest(_time) as latest_time by host". When you manipulate _time like that, you'll notice it converts to epoch, so you'll probably want to convert it back to a human-readable format with "| convert ctime(latest_time)". The full search might look something like this:

source=package sourcetype=package NAME=xz-libs VERSION IN ("5.6.0","5.6.1")
| stats latest(_time) as latest_time by host
| convert ctime(latest_time)

If anyone else has anything to add, please reply or add your answer.
How to detect CVE-2024-3094 with Splunk?
@ITWhisperer  Thanks a lot!!