Hi all! Recently I needed to implement a centralized Splunk Universal Forwarder installation on Linux machines. I managed to do this without using Ansible, starting from the script by lguinn2: https://community.splunk.com/t5/Getting-Data-In/Simple-installation-script-for-Universal-Forwarder/m-p/21517. Today I want to share it with everyone; if you have any comments or improvements, please write! I ended up with two slightly different scripts: script 1 for CentOS and SUSE, script 2 for Ubuntu and Debian, because they use different installation packages. Yes, some things could have been done differently; I just had to get it done as quickly as possible, and my Linux knowledge is also a bit shaky. Well, this post is for people like me.

Note: the scripts have been tested from an Ubuntu distribution, so it is recommended that you run them from an Ubuntu machine. Successful completion requires SSH access to the target devices and the ability to execute the ssh and sshpass commands. All target machines must have an identical account with the same password, and this account must be able to run commands via sudo. MyForwarders and MyForwarders_U are simple text files storing the IP addresses of the target machines, one per line. I think everything else is clear from the comments; even if questions come up along the way, you will figure it out!

1. CentOS / SUSE (RPM):

#!/bin/bash
# Credentials of the user who will connect to the target host and run Splunk.
read -p "Enter SSH user name: " username
echo -n "Enter SSH user password: "
stty -echo
read password
stty echo
echo

INSTALLED=False
# Log file for Splunk status
STATUS_LOG="/home/zhanali/splunk_status.txt"
# File with the target machines' IPs
HOSTS_FILE="/home/zhanali/MyForwarders"
# Installation package location
INSTALL_FILE="/home/zhanali/splunkforwarder-9.1.0.1-77f73c9edb85.x86_64.rpm"

PREPARE_COMMANDS="
echo $password | sudo -S -k mkdir /opt/splunkforwarder 2>/dev/null
echo $password | sudo -S -k chown -R splunk:splunk /opt/splunkforwarder 2>/dev/null
"

INSTALL_COMMANDS="
echo $password | sudo -S -k chmod 644 /opt/splunkforwarder/splunkforwarder-9.1.0.1-77f73c9edb85.x86_64.rpm 2>/dev/null
echo $password | sudo -S -k rpm -i /opt/splunkforwarder/splunkforwarder-9.1.0.1-77f73c9edb85.x86_64.rpm 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt --seed-passwd '!@#qweasdZXC' 2>/dev/null
echo 'Please wait 10 seconds....'
sleep 10
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk stop 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk disable boot-start 2>/dev/null
echo $password | sudo -S -k chown -R splunk:splunk /opt/splunkforwarder 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk enable boot-start -user $username 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk start 2>/dev/null
echo $password | sudo -S -k mkdir /home/$username/.splunk 2>/dev/null
echo $password | sudo -S -k chmod 777 -R /home/$username/.splunk 2>/dev/null
echo $password | sudo -S -k sudo -u $username /opt/splunkforwarder/bin/splunk add forward-server 172.16.30.104:9997 -auth 'admin':'!@#qweasdZXC' 2>/dev/null
echo $password | sudo -S -k sudo -u $username /opt/splunkforwarder/bin/splunk set deploy-poll 172.16.30.104:8089 -auth 'admin':'!@#qweasdZXC' 2>/dev/null
"

echo "In 5 seconds, will run the following script on each remote host:"
echo
sleep 5
echo "Reading host IPs from $HOSTS_FILE"
echo
echo "Starting."

for DST in `cat "$HOSTS_FILE"`; do
    if [ -z "$DST" ]; then continue; fi
    echo "---------------------------------" | tee -a $STATUS_LOG
    echo "Starting work with $DST" | tee -a $STATUS_LOG
    sshpass -p $password ssh -q $username@$DST "[[ -f /opt/splunkforwarder/bin/splunk ]]" && INSTALLED=True || INSTALLED=False
    if [ "$INSTALLED" = "True" ]; then
        echo "Splunk UF is already installed" | tee -a $STATUS_LOG
        version=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk version | grep 'Splunk Universal Forwarder'" 2>/dev/null)
        echo "Splunk UF version: $version" | tee -a $STATUS_LOG
        status=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk status | grep 'splunkd is '" 2>/dev/null)
        echo "Splunk UF status: $status" | tee -a $STATUS_LOG
        dep=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/deploymentclient.conf | grep '172.16.30.104:8089'" 2>/dev/null)
        fwd=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/outputs.conf | grep '172.16.30.104:9997'" 2>/dev/null)
        if [ -z "$dep" ]; then
            echo "Deployment server is not configured" | tee -a $STATUS_LOG
        else
            echo "Deployment server is configured" | tee -a $STATUS_LOG
        fi
        if [ -z "$fwd" ]; then
            echo "Forward server is not configured" | tee -a $STATUS_LOG
        else
            echo "Forward server is configured" | tee -a $STATUS_LOG
        fi
        INSTALLED=False
    else
        echo "Splunk UF is not installed to host $DST" | tee -a $STATUS_LOG
        echo "Installing..." | tee -a $STATUS_LOG
        sshpass -p $password ssh $username@$DST "$PREPARE_COMMANDS"
        sshpass -p $password scp $INSTALL_FILE $username@$DST:/opt/splunkforwarder
        sshpass -p $password ssh $username@$DST "$INSTALL_COMMANDS"
        echo "Installation is done" | tee -a $STATUS_LOG
        echo "Checking..." | tee -a $STATUS_LOG
        status=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk status | grep 'splunkd is '" 2>/dev/null)
        echo "Splunk UF status: $status" | tee -a $STATUS_LOG
        dep=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/deploymentclient.conf | grep '172.16.30.104:8089'" 2>/dev/null)
        fwd=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/outputs.conf | grep '172.16.30.104:9997'" 2>/dev/null)
        if [ -z "$dep" ]; then
            echo "Deployment server is not configured" | tee -a $STATUS_LOG
        else
            echo "Deployment server is configured" | tee -a $STATUS_LOG
        fi
        if [ -z "$fwd" ]; then
            echo "Forward server is not configured" | tee -a $STATUS_LOG
        else
            echo "Forward server is configured" | tee -a $STATUS_LOG
        fi
    fi
    echo "---------------------------------" | tee -a $STATUS_LOG
done
echo "Done"

And 2. Ubuntu / Debian (DEB):

#!/bin/bash
# Credentials of the user who will connect to the target host and run Splunk.
read -p "Enter SSH user name: " username
echo -n "Enter SSH user password: "
stty -echo
read password
stty echo
echo

INSTALLED=False
# Log file for Splunk status
STATUS_LOG="/home/zhanali/splunk_status.txt"
# File with the target machines' IPs
HOSTS_FILE="/home/zhanali/MyForwarders_U"
# Installation package location
INSTALL_FILE="/home/zhanali/splunkforwarder-9.1.0.1-77f73c9edb85-linux-2.6-amd64.deb"

PREPARE_COMMANDS="
echo $password | sudo -S -k mkdir /opt/splunkforwarder 2>/dev/null
echo $password | sudo -S -k chown -R splunk:splunk /opt/splunkforwarder 2>/dev/null
"

INSTALL_COMMANDS="
echo $password | sudo -S -k dpkg -i /opt/splunkforwarder/splunkforwarder-9.1.0.1-77f73c9edb85-linux-2.6-amd64.deb 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt --seed-passwd '!@#qweasdZXC' 2>/dev/null
echo 'Please wait 10 seconds....'
sleep 10
echo $password | sudo -S -k chown -R splunk:splunk /opt/splunkforwarder 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk stop 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk enable boot-start -user $username 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk start 2>/dev/null
echo 'Please wait 5 seconds....'
sleep 5
echo $password | sudo -S -k sudo -u $username /opt/splunkforwarder/bin/splunk add forward-server 172.16.30.104:9997 -auth 'admin':'!@#qweasdZXC' 2>/dev/null
echo $password | sudo -S -k sudo -u $username /opt/splunkforwarder/bin/splunk set deploy-poll 172.16.30.104:8089 -auth 'admin':'!@#qweasdZXC' 2>/dev/null
"

echo "In 5 seconds, will run the following script on each remote host:"
echo
sleep 5
echo "Reading host IPs from $HOSTS_FILE"
echo
echo "Starting."

for DST in `cat "$HOSTS_FILE"`; do
    if [ -z "$DST" ]; then continue; fi
    echo "---------------------------------" | tee -a $STATUS_LOG
    echo "Starting work with $DST" | tee -a $STATUS_LOG
    sshpass -p $password ssh -q $username@$DST "[[ -f /opt/splunkforwarder/bin/splunk ]]" && INSTALLED=True || INSTALLED=False
    if [ "$INSTALLED" = "True" ]; then
        echo "Splunk UF is already installed" | tee -a $STATUS_LOG
        version=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk version | grep 'Splunk Universal Forwarder'" 2>/dev/null)
        echo "Splunk UF version: $version" | tee -a $STATUS_LOG
        status=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk status | grep 'splunkd is '" 2>/dev/null)
        echo "Splunk UF status: $status" | tee -a $STATUS_LOG
        dep=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/deploymentclient.conf | grep '172.16.30.104:8089'" 2>/dev/null)
        fwd=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/outputs.conf | grep '172.16.30.104:9997'" 2>/dev/null)
        if [ -z "$dep" ]; then
            echo "Deployment server is not configured" | tee -a $STATUS_LOG
        else
            echo "Deployment server is configured" | tee -a $STATUS_LOG
        fi
        if [ -z "$fwd" ]; then
            echo "Forward server is not configured" | tee -a $STATUS_LOG
        else
            echo "Forward server is configured" | tee -a $STATUS_LOG
        fi
        INSTALLED=False
    else
        echo "Splunk UF is not installed to host $DST" | tee -a $STATUS_LOG
        echo "Installing..." | tee -a $STATUS_LOG
        sshpass -p $password ssh $username@$DST "$PREPARE_COMMANDS"
        sshpass -p $password scp $INSTALL_FILE $username@$DST:/opt/splunkforwarder
        sshpass -p $password ssh $username@$DST "$INSTALL_COMMANDS"
        echo "Installation is done" | tee -a $STATUS_LOG
        echo "Checking..." | tee -a $STATUS_LOG
        status=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk status | grep 'splunkd is '" 2>/dev/null)
        echo "Splunk UF status: $status" | tee -a $STATUS_LOG
        dep=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/deploymentclient.conf | grep '172.16.30.104:8089'" 2>/dev/null)
        fwd=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/outputs.conf | grep '172.16.30.104:9997'" 2>/dev/null)
        if [ -z "$dep" ]; then
            echo "Deployment server is not configured" | tee -a $STATUS_LOG
        else
            echo "Deployment server is configured" | tee -a $STATUS_LOG
        fi
        if [ -z "$fwd" ]; then
            echo "Forward server is not configured" | tee -a $STATUS_LOG
        else
            echo "Forward server is configured" | tee -a $STATUS_LOG
        fi
    fi
    echo "---------------------------------" | tee -a $STATUS_LOG
done
echo "Done"
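One possible hardening tweak, not part of the original scripts above: iterating with `for DST in `cat "$HOSTS_FILE"`` splits on any whitespace and cannot skip comment lines. A `while read` loop is a common, slightly safer alternative. The file name, IPs, and output file below are made up for illustration; the loop only prints what it would do instead of connecting anywhere:

```shell
#!/bin/bash
# Sketch: read target IPs one per line, skipping blank lines and '#' comments.
HOSTS_FILE="MyForwarders"

# Demo hosts file, for illustration only.
printf '10.0.0.1\n\n# staging box, skip for now\n10.0.0.2\n' > "$HOSTS_FILE"

rm -f deploy_plan.txt
while IFS= read -r DST; do
    [ -z "$DST" ] && continue             # skip blank lines
    case "$DST" in \#*) continue ;; esac  # skip comment lines
    echo "would deploy to $DST" | tee -a deploy_plan.txt
done < "$HOSTS_FILE"
```

The same loop body as in the scripts (sshpass/ssh/scp) would drop in where the `echo` is; `read -r` also preserves any backslashes in host names.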
Hi, my environment is UF -> HF -> indexer cluster. I have many errors on my HF saying it can't receive the data. Some are like: "ERROR TcpInputProc - Message rejected. Received unexpected message of size=369295616 bytes from src=xxxx:xxxx in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload." And some are like: "ERROR TcpInputProc - Encountered Streaming S2S error = Received reference to unknown channel_code=1 for data received from src=xxx:xxx". Any help?
Hi Team, how do I write a search query for CPU & memory utilization? Please help on this. Thanks.
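If the hosts run the Splunk Add-on for Unix and Linux, something along these lines is a common starting point. The index name `os` and the field names (`pctIdle`, `memUsedPct`) are assumptions that depend on how the add-on is configured in your environment:

```
index=os sourcetype=cpu CPU=all
| eval cpu_used_pct = 100 - pctIdle
| timechart avg(cpu_used_pct) AS avg_cpu_pct by host

index=os sourcetype=vmstat
| timechart avg(memUsedPct) AS avg_mem_pct by host
```

For Windows hosts the equivalent data usually comes from perfmon sourcetypes instead, with different field names.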
Hi, we have Splunk Website Monitoring 2.6 on Splunk Enterprise 7.2.6. All of a sudden my website monitoring summary page is blank and no URLs are being monitored. Kindly please help.
Data: {"Field1":"xxx","message1":"{0}","message2":"xxx","message3":{"TEXT":"xxxx: xxx\r\n.xxxxx: {\"xxxxx\":{\"@CDI\":\"@ABC-123G-dhskdd-ghdkshd122@hkfhksdf12-djkshd12-hkdshd12 \",\"@RETURN\":\"xxxx-xxxxxxxxxx-xx-xxxxx\",\"@message4\":\"xxxxxx:xxx\",\"message5\":{\"message6............

I want to extract the new field highlighted above (the @RETURN value), but I am not getting any result.

This is what I tried:

| rex field=_raw "RETURN\\\"\:\\\"(?<Field2>[^\\]+)"
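The value after `@RETURN` is wrapped in escaped quotes (`\"`), so the regex has to match a literal backslash followed by a quote, and the capture must stop before the next backslash. As plain PCRE that is `@RETURN\\":\\"([^\\]+)`. One way to write it in `rex` is below; escaping inside SPL double-quoted strings is finicky, so treat this as a starting point rather than a guaranteed answer:

```
| rex field=_raw "@RETURN\\\\\":\\\\\"(?<Field2>[^\\\\]+)"
```

Each `\\\\` in the SPL string reaches the regex engine as `\\` (a literal backslash), and `\"` as a quote character.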
I basically have a long playbook consisting of sub-playbooks. I have 5 artifacts in a container I am using, where 4 will be dropped via 4 different decision actions and posted to a Confluent topic. T... See more...
I basically have a long playbook consisting of sub-playbooks. I have 5 artifacts in a container I am using, where 4 will be dropped via 4 different decision actions and posted to a Confluent topic. The final artifact will make it through to the end of the playbook and also be posted in a Confluent topic. When I run each artifact individually, they work perfectly. However, when I try to run "all artifacts (5 in the container)" to simulate the artifacts coming in at the same time, they are each posted 5 times in the Confluent topic, totaling 25 instead of just 5. I have two hunches as to where the problem might be; one where the phantom.decision() is evaluating to True, despite only one artifact matching that criterion and just posting all 5 instead of 1 artifact. The other is that there is no "end" after my Post actions, so each artifact is being posted to Confluent, but then also continuing to the next Playbook against my intentions. I have no idea what is causing this and haven't found much in terms of documentation for my issue. I just find it annoying that they will work perfectly fine individually but the opposite when called together. This might be how it is designed to be, or probably that I'm doing something simply incorrectly, but any help regarding this would be greatly appreciated!
Where can I find the HL7 add on for Splunk? We created a solution around this for healthcare field. We now have an official go ahead for a POC with Splunk in Asia. We need HL7 add on. Can you please help us? Thanks, Sanjay
Hello, I am trying to learn Splunk, and to that end I have set up a demo version in my home lab on a Linux system. I have Splunk running and I added the local files. Then I activated port 9997 and installed a universal forwarder on my Windows 10 PC. I can see on Linux with tcpdump that I am getting packets on port 9997, but I can't get the data into Splunk! When I try to add data from a forwarder manually, I see a message that I have no forwarders configured. What am I doing wrong?
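For reference, the minimal wiring is a receiving port on the Splunk Enterprise side plus an outputs.conf on the UF. The host name below is a placeholder; paths assume a default Windows UF install:

```
# On the Linux Splunk Enterprise instance: enable receiving on 9997
# (Settings > Forwarding and receiving > Configure receiving).

# On the Windows 10 UF, e.g. in
# C:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = <linux-splunk-host>:9997
```

After editing, restart the UF and check its splunkd.log (under var\log\splunk) for connection errors; seeing packets with tcpdump only proves TCP reachability, not a successful S2S handshake.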
"A custom JavaScript error caused an issue loading your dashboard." I'm seeing this error on both the Palo Alto Networks app and the add-on app, and I'm unsure why reporting is no longer ingesting data. Thanks
I need to run a daily LDAP search that grabs only the accounts that have changed in the last 2 days. I can hard-code a date into the whenChanged attribute:

| ldapsearch search="(&(objectClass=user)(whenChanged>=20230817202220.0Z)(!(objectClass=computer)))" | table cn whenChanged whenCreated

I am trying to turn whenChanged into a "last 2 days" value that works with ldapsearch. I can create the value using:

| makeresults | eval whenChanged=strftime(relative_time(now(),"-2d@d"),"%Y%m%d%H%M%S.0Z") | fields - _time

I could use some help getting that dynamic value into the ldapsearch filter so that I am matching whenChanged values >= that timestamp.
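One commonly suggested pattern for this kind of problem (hedged, I have not run it against your directory) is to compute the timestamp first and substitute it into a templated `ldapsearch` with the `map` command:

```
| makeresults
| eval whenChanged=strftime(relative_time(now(), "-2d@d"), "%Y%m%d%H%M%S.0Z")
| map search="| ldapsearch search=\"(&(objectClass=user)(whenChanged>=$whenChanged$)(!(objectClass=computer)))\" | table cn whenChanged whenCreated"
```

`map` runs its templated search once per incoming row, replacing `$whenChanged$` with the computed value; with a single `makeresults` row that is exactly one ldapsearch invocation.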
In Step 2 "Add the Dataset" of "Create Anomaly Job" within the Splunk App for Anomaly Detection, when running the following SPL:

index=wineventlog_security | timechart count

we get the warning "Could not load lookup=LOOKUP-HTTP_STATUS No matching fields exist." What can it be? We use the following versions:

Splunk App for Anomaly Detection - 1.1.0
Python for Scientific Computing - 4.1.2
Splunk Machine Learning Toolkit - 5.4.0
Good afternoon, I am receiving a number of events in Splunk SOAR from Splunk, and I have a playbook that is executed for each event. I am wondering whether the playbook executes on the events in sequence or simultaneously. When I receive 3 events, I need the playbook to run on event 1 first, then 2, then 3, but from what I've seen SOAR executes the playbook out of order, for example 3, 1, 2. I would appreciate it if anyone has any information on this.
Interested in getting live help from a Splunk expert? Register here for our upcoming session on Splunk IT Service Intelligence (ITSI) on Wed, September 13, 2023 at 1pm PT / 4pm ET. This is your opportunity to ask questions related to your specific ITSI challenge or use case, including:

- ITSI installation and troubleshooting, including Splunk Content Packs
- Implementing ITSI use cases and procedures
- How to organize and correlate events
- Using machine learning for predictive alerting
- How to maintain accurate & up-to-date service maps
- Creating ITSI Glass Tables, leveraging performance dashboards (e.g., Episode Review), and anything else you’d like to learn!

Check out Community Office Hours for a list of all upcoming sessions. Join the #office-hours user Slack channel to ask questions and join the fun (request access here).
I am trying to filter out multiple values from two fields but am not getting the expected result.

index=test_01 EventCode=4670 NOT (Field 1 = value1 OR Field 1 = value2) NOT (Process_Name = value 3 OR Process_Name = value 4)

I am getting Splunk results which include Process_Name=value 3 and Process_Name=value 4.
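For what it's worth, SPL field names cannot contain spaces and values containing spaces must be quoted, so `Field 1 = value1` will not match the way it reads. Assuming the actual field names are `Field1` and `Process_Name`, a form like this is a typical way to exclude several values per field:

```
index=test_01 EventCode=4670 NOT Field1 IN ("value1", "value2") NOT Process_Name IN ("value 3", "value 4")
```

The quotes around "value 3" and "value 4" matter; without them each word is treated as a separate search term.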
Howdy Splunkers, working on my Splunk deployment and I ran into a funky issue. I am ingesting Palo Alto FW and Meraki network device logs via a syslog server. Rsyslog is set to write the logs down to files and the UF is set to monitor the directories. No issues there; however, I do run into an issue when I try to set a sourcetype or an index for these logs. I have edited indexes.conf in the local folder on my cluster manager and pushed the required indexes to my indexers. When I go to search for the logs on my search head, I cannot find any data. However, it works properly whenever I do not have the sourcetype and index destination in my inputs.conf. Any idea as to why?
test_id": "CHICKEN-0123456", "last_test_date": "2023-09-04 12:34:00"

With a file like the above and today's date being 09/25/2023: once the file is monitored by Splunk, I cannot find this data when searching over the current date, or even the last 15 or 60 minutes. Instead, Splunk reads the date off of the file, the 'last test date' = 09/24/2023, so in the search I have to go back to that day (or at least 1 day) to find the data.

props.conf is currently set with DATETIME_CONFIG = CURRENT.

I want the file to be 'read' as today if it was uploaded today (or as the last 15 minutes if it was uploaded within 15 minutes), not going off of the date inside the file. Gurus, hop in please.
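For reference, `DATETIME_CONFIG = CURRENT` only takes effect in a stanza matching the data, on the instance where event parsing happens, and after a restart. The sourcetype name below is a placeholder:

```
# props.conf on the first full Splunk instance that parses the data
# (indexer or heavy forwarder) -- a universal forwarder does not apply it
[your_sourcetype]
DATETIME_CONFIG = CURRENT
```

Two common gotchas: the setting must not be overridden by a more specific source:: or host:: stanza, and events indexed before the change keep their old timestamps; only newly indexed data gets the current time.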
Hi All, I am looking for an SPL query to generate an SLA metrics KPI dashboard for incidents in Splunk Mission Control. The dashboard should contain the SLA status (met/not met) and the analyst assigned to each incident. Thank you.
Hello, does the "WHERE" SQL clause have the same row limitation as "INNER JOIN"? Do "WHERE" and "INNER JOIN" have the same function and result? Thank you for your help. For example:

| dbxquery connection=DBtest query="SELECT a.name, b.department FROM tableEmployee a INNER JOIN tableCompany b ON a.id = b.emp_id"

| dbxquery connection=DBtest query="SELECT a.name, b.department FROM tableEmployee a, tableCompany b WHERE a.id = b.emp_id"
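Conceptually, yes: for an inner join, the explicit `INNER JOIN ... ON` form and the older comma-plus-`WHERE` form return the same rows, and neither imposes a row limit by itself (any row cap would come from dbxquery settings or the JDBC driver, not the SQL). Stripped of the dbxquery wrapper, the two statements are equivalent:

```sql
-- Explicit join syntax (preferred in modern SQL)
SELECT a.name, b.department
FROM tableEmployee a
INNER JOIN tableCompany b ON a.id = b.emp_id;

-- Implicit join: cross product filtered by WHERE, same result set
SELECT a.name, b.department
FROM tableEmployee a, tableCompany b
WHERE a.id = b.emp_id;
```

The difference only matters for outer joins, where the `WHERE` form cannot express preserved unmatched rows.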
Hi, I'm trying to create a filter based on a threshold value that is unique for some objects and fixed for the others.

index=main | lookup thresholds_table.csv object OUTPUT threshold | where number > threshold

The lookup contains something like:

object  threshold
chair   20
pencil  40

The problem is that not all objects are in the lookup, so I want a fixed threshold for all other objects; for example, a threshold of 10 for every object except those in the lookup. I tried these things without success:

index=main | lookup thresholds_table.csv object OUTPUT threshold | eval threshold = coalesce(threshold, 10) | where number > threshold
index=main | fillnull value=10 threshold | lookup thresholds_table.csv object OUTPUT threshold | where number > threshold
index=main | eval threshold = 10 | lookup thresholds_table.csv object OUTPUT threshold | where number > threshold

The objective is to identify when an object reaches some average value X, except for those objects that have a higher configured threshold.
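The first attempt (lookup, then coalesce the missing thresholds, then filter) is the standard pattern; the most common reasons it silently fails are a misspelled command name (`lookup`) or string-vs-number comparison in `where`. A hedged sketch, assuming the lookup and field names from the post:

```
index=main
| lookup thresholds_table.csv object OUTPUT threshold
| eval threshold = coalesce(tonumber(threshold), 10)
| where tonumber(number) > threshold
```

`coalesce` keeps the lookup's value where a match was found and falls back to 10 otherwise; `tonumber` guards against lexical comparison (where "9" > "40").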
I am trying to create a timeline dashboard that shows the number of events for a specific user over the last 7 days (x-axis being _time and y-axis being the number of events). We do not have a field option for individual users yet. The syntax I have shows a nice timeline in Search, but when I try to create a dashboard line chart for it, I either get nothing or mismatched info. Syntax I use for search: index="myindex1" OSPath="C:\\Users\\Snyder\\*".
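A line chart generally wants an explicit `timechart` rather than relying on the event timeline, and the backslashes should stay doubled in the dashboard's search string exactly as typed in the search bar. A sketch using the search from the post, with an assumed daily granularity:

```
index="myindex1" OSPath="C:\\Users\\Snyder\\*" earliest=-7d@d
| timechart span=1d count
```

This emits one row per day with a `count` column, which maps directly onto a dashboard line or column chart (_time on x, count on y).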